ziglang / zig

General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
https://ziglang.org
MIT License

Remove `usingnamespace` #20663

Open mlugg opened 1 month ago

mlugg commented 1 month ago

This is a proposal to eliminate the usingnamespace keyword from the Zig programming language. I know this is a controversial one, but please read the whole thing before leaving any comments.

Overview

The usingnamespace keyword has existed in Zig for many years. Its functionality is to import all declarations from one namespace into another.

usingnamespace was formerly called use, and was actually considered for deletion in the past -- see #2014. Ultimately, Andrew decided not to remove it from the language (see that issue for his main justifications).

In recent years, the usage of usingnamespace in first-party ZSF code has declined. Across the entire compiler, standard library, and compiler-rt, there is now precisely one usage of usingnamespace:

https://github.com/ziglang/zig/blob/9d9b5a11e873cc15e3f1b6e506ecf22c8380c87d/lib/std/c.zig#L35-L48

Every other grep result that shows up for usingnamespace is simply supporting code for it across the compiler pipeline. This leads us to revisit the decision to keep it in the language.

This is a proposal to delete usingnamespace from the language, with no direct replacement.

Why?

This proposal is not just to prune language complexity (although that is one motivation). In fact, there are three practical reasons I believe this language feature should be deleted.

Code Readability

A core tenet of Zig's design is that readability is favored over writability, because the majority of any programmer's time is inevitably spent reading code rather than writing it. Unfortunately, usingnamespace has a tendency to significantly harm the readability of code that uses it.

Consider, as an example, the singular usage of usingnamespace in the standard library. If I want to call the Linux libc function dl_iterate_phdr, where should I look to determine if this function is in std.c? The obvious answer would be std/c.zig, but this is incorrect. The usingnamespace here means that the definition of this function is actually in std/c/linux.zig! Similarly, if I want to discover which platforms this function is defined on, and how its signature differs between platforms, I have to check each file in std/c/*.zig, and compare the signatures between those files.

Without usingnamespace, the story would be completely different. dl_iterate_phdr would be defined directly in std/c.zig, perhaps with a switch per-target. All of the information about this function would be localized to the namespace which actually contains it: I could see, in the space of perhaps 10 lines, which platforms this function was defined on, and whether/how the signatures differ.
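
As a rough sketch of what that per-target definition could look like (hypothetical code, not the actual std.c declaration: the target list is illustrative, the signature is simplified, and dl_phdr_info is a stand-in type):

const builtin = @import("builtin");

// Stand-in for the real extern struct, just to keep the sketch self-contained.
pub const dl_phdr_info = opaque {};

pub const dl_iterate_phdr = switch (builtin.os.tag) {
    // Illustrative target list; the real set of platforms differs.
    .linux, .freebsd, .netbsd => private.dl_iterate_phdr,
    // Only triggers if the declaration is actually referenced on such a target.
    else => @compileError("dl_iterate_phdr not available on this target"),
};

const private = struct {
    extern "c" fn dl_iterate_phdr(
        callback: *const fn (info: *dl_phdr_info, size: usize, data: ?*anyopaque) callconv(.C) c_int,
        data: ?*anyopaque,
    ) c_int;
};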

This is one example of a more general problem with usingnamespace: it adds distance between the "expected" definition of a declaration and its "actual" definition. Without usingnamespace, discovering a declaration's definition site is incredibly simple: you find the definition of the namespace you are looking in (for instance, you determine that std.c is defined by std/c.zig), and you find the identifier being defined within that type declaration. With usingnamespace, however, you can be led on a wild goose chase through different types and files.

Not only does this harm readability for humans, but it is also problematic for tooling; for instance, Autodoc cannot reasonably see through non-trivial uses of usingnamespace (try looking for dl_iterate_phdr under std.c in the std documentation).

Poor Namespacing

usingnamespace can encourage questionable namespacing. When declarations are stored in a separate file, that typically means they share something in common which is not shared with the contents of another file. As such, it is likely a very reasonable choice to actually expose the contents of that file via a separate namespace, rather than including them in a more general parent namespace. I often summarize this point as "Namespaces are good, actually".

For an example of this, consider std.os.linux.IoUring. In Zig 0.11.0, this type (at the time named IO_Uring) lived in a file std/os/linux/io_uring.zig, alongside many "sibling" types, such as SubmissionQueue. This file was imported into std.os.linux with a pub usingnamespace, resulting in namespacing like std.os.linux.SubmissionQueue. However, since SubmissionQueue is directly related to io_uring, this namespacing makes no sense! Instead, SubmissionQueue should indeed be namespaced within a namespace specific to io_uring. As it happens, this namespace is, well, the IoUring type. We now have std.os.linux.IoUring.SubmissionQueue, which is unambiguously better namespacing.

Incremental Compilation

A key feature of the Zig compiler, which is rapidly approaching usability, is incremental compilation: the ability for the compiler to determine which parts of a Zig program have changed, and recompile only the necessary code.

As a part of this, we must model "dependencies" between declarations in Zig code. For instance, when you write an identifier which refers to a container-level declaration, a dependency is registered so that if that declaration's type or value changes, this declaration must also be recompiled.

I don't intend to explain the whole dependency model in the compiler here, but suffice it to say, there are some complexities. One complexity which is currently unsolved is usingnamespace. The current (WIP) implementation of incremental compilation essentially assumes that usingnamespace is never used; it is not modeled in the dependency system. The reason for this is that it complicates the model for the reasons we identified above: without usingnamespace, we can know all names within a namespace purely from syntax, whereas with usingnamespace, semantic analysis may be required. The Zig language has some slightly subtle rules about when usingnamespace declarations are semantically analyzed, which aim to reduce dependency loops. Chances are you have never thought about this -- that's the goal of those rules! However, they very much exist, and modelling them correctly in incremental compilation -- especially without performing a large amount of unnecessary re-analysis -- is a difficult problem. It's absolutely a surmountable one, but it may be preferable to simplify the language so that this complexity no longer exists.

Note that changing the language to aid incremental compilation, even if the changes are not strictly necessary, is something Andrew has explicitly stated he is happy to do. There is precedent for this in, for example, the recent changes to type resolution, which changed the language (by introducing the rule that all referenced types are fully resolved by the end of compilation) to simplify the implementation of incremental compilation.

Use Cases

This section addresses some common use cases of usingnamespace, and discusses alternative methods to achieve the same result without the use of this language feature.

Conditional Inclusion

usingnamespace can be used to conditionally include a declaration as follows:

pub usingnamespace if (have_foo) struct {
    pub const foo = 123;
} else struct {};

The solution here is pretty simple: usually, you can just include the declaration unconditionally. Zig's lazy compilation means that it will not be analyzed unless referenced, so there are no problems!

pub const foo = 123;

Occasionally, this is not a good solution, as it lacks safety. Perhaps analyzing foo will always work, but will only give a meaningful result if have_foo is true, and it would be a bug to use it in any other case. In such cases, the declaration can be conditionally made a compile error:

pub const foo = if (have_foo)
    123
else
    @compileError("foo not supported on this target");

Note that this does break feature detection with @hasDecl. However, feature detection through this mechanism is discouraged anyway, as it is very prone to typos and bitrotting.
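
To make that caveat concrete, here is a minimal sketch (ns and have_foo are made up for illustration) of why @hasDecl-based detection stops working once the declaration exists unconditionally:

const std = @import("std");

const have_foo = true; // hypothetical comptime feature flag

const ns = struct {
    pub const foo = if (have_foo) 123 else @compileError("foo not supported on this target");
};

comptime {
    // `foo` is always *declared*, so @hasDecl can no longer tell you whether it
    // is actually usable on this target; it reports true either way.
    std.debug.assert(@hasDecl(ns, "foo"));
}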

Implementation Switching

A close cousin of conditional inclusion, usingnamespace can also be used to select from multiple implementations of a declaration at comptime:

pub usingnamespace switch (target) {
    .windows => struct {
        pub const target_name = "windows";
        pub fn init() T {
            // ...
        }
    },
    else => struct {
        pub const target_name = "something good";
        pub fn init() T {
            // ...
        }
    },
};

The alternative to this is incredibly simple, and in fact, results in obviously better code: just make the definition itself a conditional.

pub const target_name = switch (target) {
    .windows => "windows",
    else => "something good",
};
pub const init = switch (target) {
    .windows => initWindows,
    else => initOther,
};
fn initWindows() T {
    // ...
}
fn initOther() T {
    // ...
}

Mixins

Okay, now we're getting to the big one. A very common use case for usingnamespace in the wild -- perhaps the most common use case -- is to implement mixins.

/// Mixin to provide methods to manipulate the `_counter` field.
pub fn CounterMixin(comptime T: type) type {
    return struct {
        pub fn incrementCounter(x: *T) void {
            x._counter += 1;
        }
        pub fn resetCounter(x: *T) void {
            x._counter = 0;
        }
    };
}

pub const Foo = struct {
    _counter: u32 = 0,
    pub usingnamespace CounterMixin(Foo);
};

Obviously this simple example is a little silly, but the use case is legitimate: mixins can be a useful concept and are used by some major projects such as TigerBeetle.

The alternative for this makes a key observation which I already mentioned above: namespacing is good, actually. The same logic can be applied to mixins. The word "counter" in incrementCounter and resetCounter already kind of is a namespace in spirit -- it's like how we used to have std.ChildProcess but have since renamed it to std.process.Child. The same idea can be applied here: what if instead of foo.incrementCounter(), you called foo.counter.increment()?

This can be elegantly achieved using a zero-bit field and @fieldParentPtr. Here is the above example ported to use this mechanism:

/// Mixin to provide methods to manipulate the `_counter` field.
pub fn CounterMixin(comptime T: type) type {
    return struct {
        pub fn increment(m: *@This()) void {
            const x: *T = @alignCast(@fieldParentPtr("counter", m));
            x._counter += 1;
        }
        pub fn reset(m: *@This()) void {
            const x: *T = @alignCast(@fieldParentPtr("counter", m));
            x._counter = 0;
        }
    };
}

pub const Foo = struct {
    _counter: u32 = 0,
    counter: CounterMixin(Foo) = .{},
};

This code provides identical effects, but with a usage of foo.counter.increment() rather than foo.incrementCounter(). We have applied namespacing to our mixin using zero-bit fields. In fact, this mechanism is more useful, because it allows you to also include fields! For instance, in this case, we could move the _counter field to CounterMixin. Of course, in this case that wouldn't be a mixin at all, but there are certainly cases where a mixin might require certain additional state, which this system allows you to avoid duplicating at the mixin's use site.
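
As a rough sketch of that last point (hypothetical names, not code from the proposal), here is the same "mixin" with the state moved into it, at which point the field is no longer zero-bit and the use site no longer needs its own _counter:

/// "Mixin" that owns its own counter state.
pub fn Counter(comptime T: type, comptime field_name: []const u8) type {
    return struct {
        count: u32 = 0,

        pub fn increment(m: *@This()) void {
            m.count += 1;
        }
        pub fn reset(m: *@This()) void {
            m.count = 0;
        }
        /// Access to the containing struct, should the mixin ever need it.
        pub fn parent(m: *@This()) *T {
            return @alignCast(@fieldParentPtr(field_name, m));
        }
    };
}

pub const Foo = struct {
    counter: Counter(Foo, "counter") = .{},
};

Usage stays foo.counter.increment(), but the counter storage now lives with the behavior instead of being duplicated at every use site.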

External Projects

This section will look over uses of usingnamespace in some real-world projects and propose alternative schemes to achieve the same effect.

Mach

I have discussed Mach's usages of usingnamespace with @slimsag in the past, and we agreed on migration mechanisms, so I won't go into too much detail here. A quick summary of a few of them:

TigerBeetle

ghostty

I won't explain any specific cases here, since ghostty is currently in closed beta. However, I have glanced through its usages of usingnamespace, and it appears most could be eliminated with a combination of (a) more namespacing and (b) occasional manual re-exports.

Snektron commented 1 month ago

How do you suggest removing the usingnamespace in c.zig? It seems to me the proposed approach leads to a lot of code duplication.

mlugg commented 1 month ago

Good point, I should have addressed that. The suggested approach leads to relatively little duplicated code in reality. The bulky things are the definitions: those are just moved from c/*.zig to c.zig, resulting in a bigger but far more searchable file. If you take a look through e.g. std/c/linux.zig, there's actually really not much going on there:

We have already started some parts of this migration. It'll certainly be tedious to complete, but I don't think it will make the std code worse.

Migrating this code is naturally a prerequisite for this proposal: so, if that migration goes poorly, it will of course give us a good reason to rethink this proposal.

mikdusan commented 1 month ago

I propose that instead of moving everything into a single lib/std/c.zig, we go with lib/std/os/NAME/c.zig; this would reduce the need for switch statements for things like `pub const AT` and dispense with most if not all of the std.c.private namespace.

lib/std/std.zig

pub const c = switch (native_os) {
    .freebsd => os.freebsd.c,
    .linux => os.linux.c,
    .macos, .ios => os.darwin.c,
    else => ...,
};

And yes, it is going to mean duplicating some extern "c" and other decls between OSes. But if there is sufficient need, those could be placed in a common_c.zig and re-decl'd in each os/NAME/c.zig where the same definition applies:

os/<NAME>/c.zig

const common = @import("../common_c.zig");

pub const read = common.read;
pub const write = common.write;

// things specific to this os
pub const AT = ...;

BratishkaErik commented 1 month ago

I have read this proposal before on Discord (I think such discussions were usually started by mlugg?) and now the complete proposal here. For now I have a very negative opinion about removing usingnamespace, but before sharing my arguments I would like to hear what others think about this, and how you would (in general) suggest migrating existing code in the zalgebra and zig-libgccjit packages (the std.c critique does not apply here because they are all in the same scope as the containing struct, and they help readability because you can read all the related stuff grouped together instead of millions of switches/ifs):

https://github.com/kooparse/zalgebra/blob/1ab7f25cd3f947289e98a617ae785607bbc5054e/src/generic_vector.zig#L46-L51

https://git.sr.ht/~bratishkaerik/zig-libgccjit/tree/9d9f3d6e47c748b3649a3ec9665c6b8fc0ab9f15/item/src/wrapper.zig#L339

ifreund commented 1 month ago

https://github.com/kooparse/zalgebra/blob/1ab7f25cd3f947289e98a617ae785607bbc5054e/src/generic_vector.zig#L46-L51

See "Implementation Switching" in the OP.

https://git.sr.ht/~bratishkaerik/zig-libgccjit/tree/9d9f3d6e47c748b3649a3ec9665c6b8fc0ab9f15/item/src/wrapper.zig#L339

See "Conditional Inclusion" in the OP.

mlugg commented 1 month ago

@BratishkaErik, here's an attempted translation of the usingnamespace in the first link. I don't have time to look at the second right now.

https://zigbin.io/cc01ce

By the way, please stop marking everything as inline without a reason. All you're doing is hurting the compiler.

aaal-dev commented 1 month ago

The «mixins» example became more complicated to read. That undercuts the point about readability that you mentioned. Removing usingnamespace may not be such a bad idea, but there are some use cases that could get worse, and you wrote one such example yourself. Maybe there needs to be another keyword for the case of a returned type, and not for a struct @import.

BratishkaErik commented 1 month ago

See "Implementation Switching" in the OP.

I rewrote just 2 functions from zalgebra according to how I understood Implementation Switching. Before:

pub usingnamespace switch (dimensions) {
    2 => struct {
        pub inline fn new(vx: T, vy: T) Self {
            return .{ .data = [2]T{ vx, vy } };
        }

        pub inline fn toVec3(self: Self, vz: T) GenericVector(3, T) {
            return GenericVector(3, T).fromVec2(self, vz);
        }
    },
    3 => struct {
        pub inline fn new(vx: T, vy: T, vz: T) Self {
            return .{ .data = [3]T{ vx, vy, vz } };
        }
    },
    4 => struct {
        pub inline fn new(vx: T, vy: T, vz: T, vw: T) Self {
            return .{ .data = [4]T{ vx, vy, vz, vw } };
        }

        pub inline fn toVec3(self: Self) GenericVector(3, T) {
            return GenericVector(3, T).fromVec4(self);
        }
    },
    else => unreachable,
};

After:

pub const new = switch (dimensions) {
    2 => struct {
        inline fn new(vx: T, vy: T) Self {
            return .{ .data = [2]T{ vx, vy } };
        }
    }.new,
    3 => struct {
        inline fn new(vx: T, vy: T, vz: T) Self {
            return .{ .data = [3]T{ vx, vy, vz } };
        }
    }.new,
    4 => struct {
        inline fn new(vx: T, vy: T, vz: T, vw: T) Self {
            return .{ .data = [4]T{ vx, vy, vz, vw } };
        }
    }.new,
    else => unreachable,
};

pub const toVec3 = switch (dimensions) {
    2 => struct {
        inline fn toVec3(self: Self, vz: T) GenericVector(3, T) {
            return GenericVector(3, T).fromVec2(self, vz);
        }
    }.toVec3,
    3 => @compileError("toVec3 not defined for GenericVector(3, " ++ @typeName(T) ++ ")"),
    4 => struct {
        fn toVec3(self: Self) GenericVector(3, T) {
            return GenericVector(3, T).fromVec4(self);
        }
    }.toVec3,
    else => unreachable,
};

I personally see this as a worse variant for both readability and compile errors. Before, you had 1 switch with every function definition in it, all grouped logically, and you had detailed compile errors if the user called a missing function:

src/generic_vector.zig:83:27: error: struct 'generic_vector.GenericVector(3,f32)' has no member named 'toVec3'
                _ = &Self.toVec3;
                          ^~~~~~
src/generic_vector.zig:33:19: note: struct declared here
    return extern struct {
           ~~~~~~~^~~~~~

you had fewer lines of code to read, and ZLS works perfectly here (go-to definition jumps to the actual function, autocomplete works with correct doc comments, etc.). After, you have ~13 switches for ~13 functions, and you have much worse compile errors (which you need to craft yourself, by the way, so forget about machine detection, localization, etc.):

src/generic_vector.zig:71:18: error: toVec3 not defined for GenericVector(3, f32)
            3 => @compileError("toVec3 not defined for GenericVector(3, " ++ @typeName(T) ++ ")"),
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You also need to scan every function to check what the rule for its inclusion is (maybe it switches on more than just dimensions, etc.), and you have ~13 verbose struct type declarations, one per function.

If we had anonymous functions like in https://github.com/ziglang/zig/issues/4170, it would regain some readability (the code would be about the same length as the old code, with no structs-just-for-a-function), but it would not fix the compile errors.

mlugg commented 1 month ago

@BratishkaErik, please see my translation linked above.

hamish-milne commented 1 month ago

const x: *T = @alignCast(@fieldParentPtr("counter", m));

I don't really use mixins myself, but if I had to use the above every time - both @fieldParentPtr which is kind of an esoteric feature, and @alignCast which reeks of un-safety - I'd hate it. The 'old' version of this looks much, much cleaner.

Encouraging better namespacing is a good goal, but I think a reasonable compromise here is to bite the bullet and just support struct composition, the pattern being:

const Bar = struct { _counter: u32 = 0 };
const Foo = compose(Bar, CounterMixin(Bar));

Or something.

This wouldn't even need a builtin, except that function declarations can't be generated/copied with reflection right now. If there was a field like impl: *anyopaque within std.builtin.Type.Declaration, you could remove the usingnamespace as dedicated syntax and simply implement it in the standard library.

BratishkaErik commented 1 month ago

I saw your response just now, sorry!

* `init` (formerly `new`; follow convention!) follows the "Implementation Switching" method described in the issue text.
* {to,from}Vec{2,3,4} also follows that strategy, but does so in such a way that avoids duplicate code and groups all these conversions nicely.

1) There's no deinit function, so it should not be named like that IMHO; it is misleading given the existing convention. 2) The file-scope namespace is now polluted with a lot of functions whose only role is to implement one function; you can call them accidentally in tests in that file, and your completion list is polluted by them (yes, I know the self parameter is not the same; it will appear anyway when you write Self. etc.). It's harder to read the implementation because you now need 2 go-to-definition commands (or, if you are in the editor, two searches). It's also all dispersed over many, many lines. I don't see how this is not less readable than the older version. Also, again, compile errors are worse IMHO.

* `z`, `w`, `zMut`, `wMut` exist unconditionally; take advantage of lazy evaluation.

And now you get errors like "index 3 outside array of length 2" instead of the more helpful "function is in fact not defined", which is also a disadvantage in my book.

Note that this isn't a function -- just use a declaration. (up should get the same treatment).

I agree here, but it would be a breaking change.

By the way, please stop marking everything as inline without a reason. All you're doing is hurting the compiler.

Thank you. This code is very old, so I don't remember why I put it here or if it was even me; maybe it was back when stage1 was the main compiler.

BratishkaErik commented 1 month ago

@BratishkaErik, please see my translation linked above.

I saw this, but decided to finish my reply first because I did not want to lose my comment to the new buggy GitHub SPA logic, or whatever they added to the site. I have already had that experience before, with review comments erased just because another comment reloaded itself.

aaal-dev commented 1 month ago

I need to clarify that readability for other people may not be what you assume. Generally speaking, there are two types of developers. The first type usually writes code in big functions, with visual control over the code. Breaking a big function into small ones is not an option for them, because small functions require several jumps to see the whole code flow, and that is what they do not want to do. The second kind of developer loves small functions, but they also love a functional style of coding. Zig has no functional features, and I assume Zig is a C competitor. So we have a problem here: C developers are much more of the first type.

BratishkaErik commented 1 month ago

I don't really use mixins myself, but if I had to use the above every time - both @fieldParentPtr which is kind of an esoteric feature, and @alignCast which reeks of un-safety - I'd hate it. The 'old' version of this looks much, much cleaner.

I use this pattern a lot, IIRC all thanks to nektro, but 1) not in situations where usingnamespace would work better, and 2) while @fieldParentPtr does not appear in optimized generated code, @alignCast still has a runtime cost for checks in safety-enabled modes or scopes. I personally fix this by moving the entire cast into a separate function with @setRuntimeSafety(false), but I can't speak for other people and whether it would be even worse for them.

notcancername commented 1 month ago

I am divided on this issue, but ultimately, I am leaning towards removing usingnamespace.

The removal of usingnamespace will make it more difficult to build cross-platform abstractions. The std.c example presented in the issue is significant because it highlights that providing alternative definitions for only some declarations requires either usingnamespace or a lot of code, as in:

const impl = switch(...) {...}; 

pub const foo = impl.foo;
pub const bar = impl.bar;
//...

Conditional inclusion is an important use case for usingnamespace. Unconditionally including a declaration that can't be used is inelegant.

The benefits of removing usingnamespace seem to outweigh the downsides; reducing the complexity of incremental compilation is a worthwhile goal. It's an unfortunate tradeoff.

aaal-dev commented 1 month ago

The usingnamespace-switch and Conditional Inclusion cases can produce code that gets worse in some situations, because converting one fully controlled switch into a group of split switches looks error-prone. There is now the potential mistake of forgetting to update all of the split switches, and that mistake can happen much more easily.

travisstaloch commented 1 month ago

Thanks @mlugg for doing this writeup. Without it I wouldn't have known how to do mixins without usingnamespace. I wanted to make sure it worked and that I understood so I removed usingnamespace from my vec.zig gist which uses mixins.

Links: original with usingnamespace, updated without usingnamespace, difference

While these changes are less readable, I'm glad to know that this is still possible! :+1:

It would be nice if this were a little simpler as now I must accept *const @This() and do @alignCast(@fieldParentPtr(...)) to get a *const Self. I did try this accepting @This() by value and referencing it, but that resulted in

/tmp/tmp.zig:26:71: error: pointer computation here causes undefined behavior
            const self: *const Self = @alignCast(@fieldParentPtr("m", &m));
                                                                      ^~
/tmp/tmp.zig:26:71: note: resulting pointer exceeds bounds of containing value which may trigger overflow

mlugg commented 1 month ago

@hamish-milne

const x: *T = @alignCast(@fieldParentPtr("counter", m));

I don't really use mixins myself, but if I had to use the above every time - both @fieldParentPtr which is kind of an esoteric feature

What do you mean by "esoteric" here? @fieldParentPtr is a lovely feature of Zig, and used somewhat often.

and @alignCast which reeks of un-safety - I'd hate it. The 'old' version of this looks much, much cleaner.

You could align the mixin field so as to omit the @alignCast, but to be honest, the way I wrote it is just easier. You can easily wrap the nasty bit up in a helper function:

/// Mixin to provide methods to manipulate the `_counter` field.
pub fn CounterMixin(comptime T: type, comptime field_name: []const u8) type {
    return struct {
        const Ptr = *@This();
        fn get(m: Ptr) *T {
            return @alignCast(@fieldParentPtr(field_name, m));
        }
        pub fn increment(m: Ptr) void {
            get(m)._counter += 1;
        }
        pub fn reset(m: Ptr) void {
            get(m)._counter = 0;
        }
    };
}

pub const Foo = struct {
    _counter: u32 = 0,
    counter: CounterMixin(Foo, "counter") = .{},
};

It's worth noting that even in projects that use them, mixins are not commonly used: they are rarely appropriate in Zig. For instance, TigerBeetle has two "mixin-esque" types used with usingnamespace, and one of them (JniInterface) should not be a mixin at all.

Encouraging better namespacing is a good goal, but I think a reasonable compromise here is to bite the bullet and just support struct composition, the pattern being:

const Bar = struct { _counter: u32 = 0 };
const Foo = compose(Bar, CounterMixin(Bar));

Or something.

This almost certainly won't ever be a thing. Composing types is awkward in terms of defining things like field merging behavior, reconciling differing struct attributes, etc. It also really lacks general usefulness. If you're trying to compose two types, we have a convenient, intuitive, existing system for doing so: fields!

This wouldn't even need a builtin, except that function declarations can't be generated/copied with reflection right now. If there was a field like impl: *anyopaque within std.builtin.Type.Declaration, you could remove the usingnamespace as dedicated syntax and simply implement it in the standard library.

Please refer to this comment for why this is not going to happen.

mlugg commented 1 month ago

@BratishkaErik

I saw you responce just now, sorry!

* `init` (formerly `new`; follow convention!) follows the "Implementation Switching" method described in the issue text.
* {to,from}Vec{2,3,4} also follows that strategy, but does so in such a way that avoids duplicate code and groups all these conversions nicely.
  1. There's no deinit function, so it should not be named like that IMHO; it is misleading given the existing convention.

Things with init do not necessarily have a corresponding deinit; see std.BoundedArray, std.Random.DefaultPrng, std.Target.DynamicLinker, std.PackedIntArray, std.EnumSet, std.StaticBitSet...

2) The file-scope namespace is now polluted with a lot of functions whose only role is to implement one function; you can call them accidentally in tests in that file, and your completion list is polluted by them (yes, I know the self parameter is not the same; it will appear anyway when you write Self. etc.),

If you're worried about this, you can namespace them under internal or something, but I really don't see this as a problem.

it's harder to read the implementation because you now need 2 go-to-definition commands (or, if you are in the editor, two searches).

But OTOH, tooling can find the definition more reliably. Even if it can't do all of the work, the first goto-def is trivial, and the second can be done by hand (or... by eyes?) very easily.

It's also all dispersed over many, many lines.

The old implementation was spread across 36 lines, while the new implementation is spread across 41 lines. That is really not a significant increase. In addition, the logic is now all actually dense; previously, similar functions -- perhaps even functions which called one another (e.g. toVec4 for the 2D case was implemented in terms of fromVec2 for the 4D case) -- were quite far separated in the source code.

I don't see how this is not less readable than the older version.

Also, again, compile errors are worse IMHO.

* `z`, `w`, `zMut`, `wMut` exist unconditionally; take advantage of lazy evaluation.

And now you get errors like "index 3 outside array of length 2" instead of the more helpful "function is in fact not defined", which is also a disadvantage in my book.

A "no such function" error is all well and good if you understand the implementation of the library, but is completely unhelpful if you're skimming the source code and "I can see the definition right there, why won't it work?", or if you're using some tooling like ZLS and its heuristics got the included methods wrong, etc. You can reasonably argue that the error for things like z is as bad as the missing-method error (although I think that one's actually pretty understandable; it's exactly what I would expect from e.g. my_vec2[2] in a language with operator overloading), but if you believe that, just introduce your own @compileErrors.

Note that this isn't a function -- just use a declaration. (up should get the same treatment).

I agree here, but it would be a breaking change.

By the way, please stop marking everything as inline without a reason. All you're doing is hurting the compiler.

Thank you. This code is very old, so I don't remember why I put it here or if it was even me; maybe it was back when stage1 was the main compiler.

rohlem commented 1 month ago

@BratishkaErik with the risk of restating something obvious,

I believe the code structure of the `usingnamespace` example in your comment can be completely preserved by simply explicitly listing all the re-exported declarations:

```zig
pub const DimensionImpl = switch (dimensions) {
    2 => struct {
        pub inline fn new(vx: T, vy: T) Self {
            return .{ .data = [2]T{ vx, vy } };
        }

        pub inline fn toVec3(self: Self, vz: T) GenericVector(3, T) {
            return GenericVector(3, T).fromVec2(self, vz);
        }
    },
    3 => struct {
        pub inline fn new(vx: T, vy: T, vz: T) Self {
            return .{ .data = [3]T{ vx, vy, vz } };
        }
    },
    4 => struct {
        pub inline fn new(vx: T, vy: T, vz: T, vw: T) Self {
            return .{ .data = [4]T{ vx, vy, vz, vw } };
        }

        pub inline fn toVec3(self: Self) GenericVector(3, T) {
            return GenericVector(3, T).fromVec4(self);
        }
    },
    else => unreachable,
};

pub const new = DimensionImpl.new; // still re-exported just as with usingnamespace
pub const toVec3 = DimensionImpl.toVec3; // still either re-exported or a compile error if referenced -> analyzed
```

ethernetsellout commented 1 month ago

I controversially agree that using zero-sized fields to emulate mixins is an adequate replacement for the status quo, though the @alignCast(@fieldParentPtr("counter", m)) itself does look a bit awkward. Regardless, we could argue over how that use case could be accommodated, but I really hope it's not through usingnamespace. This feature has led to many a game of "where did this declaration come from" and I'd be happy to see it removed.

Also wanted to add that I feel no envy for the work the compiler has had to do to support this feature for so long. The 'subtle rules' you mention do not sound fun to write code for or design around.

slimsag commented 1 month ago

I'm here just to express that I am greatly in favor of this proposal because almost every time I encounter usingnamespace in practice, it has led to much worse code readability.

Mach engine is not very affected by this proposal because we have had several initiatives to banish usingnamespace from our codebase (although it has snuck back in a few times), and every time we have removed it, it has led to improved code readability.

Big 👍 from me.

nektro commented 1 month ago

both the "Implementation Switching" and "Mixins" alternatives do not create better code imo.

The alternative to this is incredibly simple, and in fact, results in obviously better code: just make the definition itself a conditional

this would be inordinately tedious and having to reproduce the conditional for every decl would either make copy/pasting it error prone or force it to be a local decl which feels silly.

This can be elegantly achieved using a zero-bit field and @fieldParentPtr. Here is the above example ported to use this mechanism:

this is not a valid alternative to one of the main ways i use mixins. consider the following code from a web project im working on:

pub const User = struct {
    pub const table_name = "users";
    pub usingnamespace ox2.sql.TableTypeMixin(User);

    id: User.Id = .{ .id = 0 },
    uuid: ulid.ULID,
    provider: string,
    snowflake: string,
    name: string,
    joined_on: ox2.db.Time,
    last_updated: ox2.db.Time,
};

adding an extra field here is not possible because the type is a template for the database table specified by its table_name decl. additionally TableTypeMixin adds methods for interacting with said database and this is included on all structs of this kind in the monorepo. some of the methods include:

const FE = std.meta.FieldEnum;
const FT = std.meta.FieldType;

pub fn byKey(alloc: std.mem.Allocator, comptime key: FE(T), value: FT(T, key)) !?T {
pub fn byKeyDesc(alloc: std.mem.Allocator, comptime key: FE(T), value: FT(T, key)) !?T {
pub fn byKeyCount(alloc: std.mem.Allocator, comptime key: FE(T), value: FT(T, key)) !u64 {
pub fn byKeyAnd(alloc: std.mem.Allocator, comptime key: FE(T), value: FT(T, key), comptime key2: FE(T), value2: FT(T, key2)) !?T {
pub fn byKeyAll(alloc: std.mem.Allocator, comptime key: FE(T), value: FT(T, key), comptime ord: Order) ![]T {
pub fn byKeyAndAll(alloc: std.mem.Allocator, comptime key: FE(T), value: FT(T, key), comptime key2: FE(T), value2: FT(T, key2), comptime ord: Order) ![]T {
pub fn update(self: *T, alloc: std.mem.Allocator, comptime key: FE(T), value: FT(T, key)) !void {
pub fn deleteBy(alloc: std.mem.Allocator, comptime key: FE(T), value: FT(T, key)) !void {

as i mentioned these methods are shared on every table struct of this kind so the readability is actually enhanced here, the usingnamespace means i dont have to copy/paste a decl definition into every type when a new method is added, and these operations make querying and updates extremely type safe and database backend agnostic.

i would make it through if usingnamespace was removed, but as it stands i very much enjoy using it in this way and feel it is a very expressive construct that strikes a balance between power and tedium when used sparingly.

mlugg commented 1 month ago

this would be inordinately tedious and having to reproduce the conditional for every decl would either make copy/pasting it error prone or force it to be a local decl which feels silly.

The condition is usually just something like builtin.target.os. If it's more complicated, I think it's perfectly reasonable to have a separate (non-pub) value deciding the implementation used or whatever.
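
For example, a minimal sketch with made-up names (not code from any project in this thread): compute the condition once, then reuse it in each declaration.

const builtin = @import("builtin");

// Hypothetical: whatever complicated rule decides which implementation to use.
const use_fast_path = builtin.os.tag == .linux and builtin.cpu.arch == .x86_64;

pub const frobnicate = if (use_fast_path) frobnicateFast else frobnicatePortable;

fn frobnicateFast(x: u32) u32 {
    return x *% 2654435761; // pretend this is the clever target-specific version
}

fn frobnicatePortable(x: u32) u32 {
    return x *% 2654435761; // and this the generic fallback
}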

this is not a valid alternative to one of the main ways i use mixins.

All of those functions other than update I would namespace as e.g. User.Db.byKey; this is a clear instance of "namespacing is good, actually". These functions are getting general information from (or storing general information to) a database, which is completely unclear from their names. We could name all the functions fromDbByKey or something, but that's the "pseudo-namespace" issue sneaking in: instead we use an actual namespace, and all of our problems go away. The Db namespacing is appropriate because this is a shared, special property of these functions: other functions on User (say, isLoggedIn) might perform interesting work ("business logic" so to speak), whereas these are specifically performing database accesses and no further work.

update falls under the same umbrella, but it's method-ey rather than function-ey. If there's just the one, perhaps user.dbUpdate; if there are multiple, then field-based namespacing (user.db.update) is appropriate. Again, namespacing such methods appropriately makes their role -- performing a database access, with no further context about the application, and no integrated logic -- clearer.

EDIT: I forgot to mention, you could combine these mechanisms easily:

pub const User = struct {
    pub const Db = ox2.sql.TableTypeMixin(User, "db", "users"); // btw, pass this name in rather than having a random decl lookup!
    db: Db = .{},

    // (real fields here)
};

And also, just to be clear, TableTypeMixin knows to ignore the db field wrt the database table structure, because it knows that this is the mixin field.
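
As a rough illustration of that (a hypothetical helper, not TableTypeMixin's actual code), a mixin handed its own field name can simply skip that field when reflecting over the parent struct:

const std = @import("std");

fn realFieldCount(comptime T: type, comptime mixin_field_name: []const u8) comptime_int {
    comptime {
        var n = 0;
        for (std.meta.fields(T)) |field| {
            // Skip the zero-bit mixin field itself; everything else is a real column.
            if (!std.mem.eql(u8, field.name, mixin_field_name)) n += 1;
        }
        return n;
    }
}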

nektro commented 1 month ago

All of those functions other than update I would namespace as e.g. User.Db.byKey; this is a clear instance of "namespacing is good, actually".

i left it out for brevity but in context it is very obvious what this type is for because its accessed as db.User and i dont want to go against Avoid Redundant Names in Fully-Qualified Namespaces either.

here's some more code that illustrates how its currently used in context: (this particular example is code in the handler for /{repo}/-/issues/{number})

    const issueno = std.fmt.parseUnsigned(u64, subpage, 10) catch return fail(response, body_writer, .not_found);
    const issue = try db.Issue.byKeyAnd(alloc, .repo, repoo.id, .number, issueno) orelse return fail(response, body_writer, .not_found);
    const all_labels = try db.IssueLabel.byKeyAll(alloc, .repo, repoo.id, .asc);

    switch (response.request.method) {
        .GET => {
            const actions = try db.IssueAction.byKeyAll(alloc, .issue, issue.id, .asc);
            var name_cache = db.User.IdCache.init(alloc);

            const page = files.@"/issue.pek";
            const tmpl = comptime pek.parse(page);
            try response.headers.append("content-type", "text/html");
            try response.headers.append("x-page", "issue");
            try pek.compile(@This(), alloc, body_writer, tmpl, .{
                //...html template keys

And also, just to be clear, TableTypeMixin knows to ignore the db field wrt the database table structure, because it knows that this is the mixin field.

that could work but it adds a layer of indirection that i dont feel adds much value here other than being a clear workaround for the lack of usingnamespace

zkburke commented 1 month ago

I'm in favour of removing usingnamespace. Implementation switching can be even more robust than proposed (in my opinion) by switching on entire structs (each containing an implementation of every function) versus switching per function:

Take this example from one of my projects:

///Implementation structure
const Impl = switch (quanta_options.windowing.preferred_backend) {
    .wayland => @compileError("Wayland not supported"),
    .xcb => @import("xcb/XcbWindow.zig"),
    .win32 => @compileError("Win32 not supported"),
};

pub const Window = struct {
    //switched out at compile time of course
    impl: Impl,

    ///Some general documentation about the usage of this function
    pub fn implFunction(self: Window) void {
        return self.impl.implFunction();
    }
};

comptime {
     //do some validation of implementations maybe
}

test "Maybe even do some general behaviour tests" {

}

And if a function isn't defined on the struct, with adequate naming, you get good error messages which tell you that a field or function isn't defined on a particular implementation struct. It also means all the implementations are grouped together based on the platform (platforms are just one example, but it applies to vector mathematics as well).

This is more verbose but it allows you to:

This is my preferred way to do it, but it might be too cumbersome for some people's tastes, which I get.

michaelbartnett commented 1 month ago

Occasionally, this is not a good solution, as it lacks safety. Perhaps analyzing foo will always work, but will only give a meaningful result if have_foo is true, and it would be a bug to use it in any other case. In such cases, the declaration can be conditionally made a compile error:

pub const foo = if (have_foo)
    123
else
    @compileError("foo not supported on this target");

Note that this does break feature detection with @hasDecl. However, feature detection through this mechanism is discouraged anyway, as it is very prone to typos and bitrotting.

I don't have any major complaints against removing usingnamespace, but we need a way for @hasDecl and iterating through decls from @typeInfo to not immediately raise a compile error in comptime code, for cases like this example of how to do feature detection.

System feature detection with @hasDecl is demonstrably error-prone, but not all other uses of @hasDecl are buggy. How else do you provide optional customization points, such as in std.fmt and std.json? This also makes introspection utilities prone to compilation failures. The methods discussed in #6620 to resolve this satisfy all reasonable use cases IMO, but that proposal has yet to land on a concrete consensus & be accepted.
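
For context, the customization-point pattern being referred to looks roughly like this (a sketch with a made-up describe hook; value is assumed to be a struct instance, since @hasDecl requires a container type):

const std = @import("std");

fn describe(value: anytype, writer: anytype) !void {
    const T = @TypeOf(value);
    // Optional customization point: if the type declares its own `describe`,
    // defer to it; otherwise fall back to a generic rendering.
    if (comptime @hasDecl(T, "describe")) {
        return value.describe(writer);
    }
    return writer.print("{any}", .{value});
}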

This is relevant to removing usingnamespace because I anticipate seeing many more of these show up in std and other libraries after the removal, since it's a recommended idiom. I already have issues with std.ArrayList(non-u8) and periodically vendor and patch array_list.zig to work with parts of my code.

Jarred-Sumner commented 1 month ago

This is a proposal to eliminate the usingnamespace keyword from the Zig programming language.

I am a big 👎 to this without more thought going into addressing the problems usingnamespace solves.

mixins are not commonly used: they are rarely appropriate in Zig.

Mixins are important

Bun is missing from the list of projects here and it might be the largest user of usingnamespace outside of the standard library.


Speaking from our experience working on a ~250k LOC Zig codebase, mixins are an important use case that encourages safer patterns and improves readability.

In Bun, we have two use cases for mixins:


Mixing handwritten code with generated code

To work with types from JavaScript, there is a lot of redundant code that must exactly match up with C++ code.

To make this simpler, we use a code generator that outputs both C++ and Zig.

Currently, there are 70 generated classes with a total of 772 properties/class fields on those classes. This works out to about 12,000 lines of generated Zig and 46,000 lines of generated C++. This number will continue to grow over time (we are going to code generate function arguments soon, too)

It generates a long list of functions like this

    /// Return the pointer to the wrapped object.
    /// If the object does not match the type, return null.
    pub fn fromJS(value: JSC.JSValue) ?*Stats {
        if (comptime Environment.enable_logs) zig("<r>Stats<d>.<r>fromJS", .{});
        return Stats__fromJS(value);
    }

    /// Return the pointer to the wrapped object only if it is a direct instance of the type.
    /// If the object does not match the type, return null.
    /// If the object is a subclass of the type or has mutated the structure, return null.
    /// Note: this may return null for direct instances of the type if the user adds properties to the object.
    pub fn fromJSDirect(value: JSC.JSValue) ?*Stats {
        if (comptime Environment.enable_logs) zig("<r>Stats<d>.<r>fromJSDirect", .{});
        return Stats__fromJSDirect(value);
    }

    extern fn StatsPrototype__atimeSetCachedValue(JSC.JSValue, *JSC.JSGlobalObject, JSC.JSValue) callconv(JSC.conv) void;

    extern fn StatsPrototype__atimeGetCachedValue(JSC.JSValue) callconv(JSC.conv) JSC.JSValue;

    /// `Stats.atime` setter
    /// This value will be visited by the garbage collector.
    pub fn atimeSetCached(thisValue: JSC.JSValue, globalObject: *JSC.JSGlobalObject, value: JSC.JSValue) void {
        JSC.markBinding(@src());
        StatsPrototype__atimeSetCachedValue(thisValue, globalObject, value);
    }

    /// `Stats.atime` getter
    /// This value will be visited by the garbage collector.
    pub fn atimeGetCached(thisValue: JSC.JSValue) ?JSC.JSValue {
        JSC.markBinding(@src());
        const result = StatsPrototype__atimeGetCachedValue(thisValue);
        if (result == .zero)
            return null;

        return result;
    }
Along with a lot of C++ that looks like this:

```c++
JSC_DEFINE_CUSTOM_GETTER(jsStatsConstructor, (JSGlobalObject * lexicalGlobalObject, EncodedJSValue thisValue, PropertyName))
{
    VM& vm = JSC::getVM(lexicalGlobalObject);
    auto throwScope = DECLARE_THROW_SCOPE(vm);
    auto* globalObject = reinterpret_cast(lexicalGlobalObject);
    auto* prototype = jsDynamicCast(JSValue::decode(thisValue));
    if (UNLIKELY(!prototype))
        return throwVMTypeError(lexicalGlobalObject, throwScope, "Cannot get constructor for Stats"_s);
    return JSValue::encode(globalObject->JSStatsConstructor());
}

JSC_DEFINE_CUSTOM_GETTER(StatsPrototype__atimeGetterWrap, (JSGlobalObject * globalObject, EncodedJSValue encodedThisValue, PropertyName attributeName))
{
    auto& vm = globalObject->vm();
    auto throwScope = DECLARE_THROW_SCOPE(vm);
    JSStats* thisObject = jsDynamicCast(JSValue::decode(encodedThisValue));
    if (UNLIKELY(!thisObject)) {
        return JSValue::encode(jsUndefined());
    }
    JSC::EnsureStillAliveScope thisArg = JSC::EnsureStillAliveScope(thisObject);
    if (JSValue cachedValue = thisObject->m_atime.get())
        return JSValue::encode(cachedValue);
    JSC::JSValue result = JSC::JSValue::decode(
        StatsPrototype__atime(thisObject->wrapped(), globalObject)
    );
    RETURN_IF_EXCEPTION(throwScope, {});
    thisObject->m_atime.set(vm, thisObject, result);
    RELEASE_AND_RETURN(throwScope, JSValue::encode(result));
}

extern JSC_CALLCONV void StatsPrototype__atimeSetCachedValue(JSC::EncodedJSValue thisValue, JSC::JSGlobalObject *globalObject, JSC::EncodedJSValue value)
{
    auto& vm = globalObject->vm();
    auto* thisObject = jsCast(JSValue::decode(thisValue));
    thisObject->m_atime.set(vm, thisObject, JSValue::decode(value));
}

extern JSC_CALLCONV JSC::EncodedJSValue StatsPrototype__atimeGetCachedValue(JSC::EncodedJSValue thisValue)
{
    auto* thisObject = jsCast(JSValue::decode(thisValue));
    return JSValue::encode(thisObject->m_atime.get());
}

JSC_DEFINE_CUSTOM_SETTER(StatsPrototype__atimeSetterWrap, (JSGlobalObject * lexicalGlobalObject, EncodedJSValue encodedThisValue, EncodedJSValue encodedValue, PropertyName attributeName))
{
    auto& vm = lexicalGlobalObject->vm();
    auto throwScope = DECLARE_THROW_SCOPE(vm);
    JSValue thisValue = JSValue::decode(encodedThisValue);
    if (!thisValue.isObject()) {
        return false;
    }
    JSObject *thisObject = asObject(thisValue);
    thisObject->putDirect(vm, attributeName, JSValue::decode(encodedValue), 0);
    return true;
}

JSC_DEFINE_CUSTOM_GETTER(StatsPrototype__atimeMsGetterWrap, (JSGlobalObject * lexicalGlobalObject, EncodedJSValue encodedThisValue, PropertyName attributeName))
{
    auto& vm = lexicalGlobalObject->vm();
    Zig::GlobalObject *globalObject = reinterpret_cast(lexicalGlobalObject);
    auto throwScope = DECLARE_THROW_SCOPE(vm);
    JSStats* thisObject = jsDynamicCast(JSValue::decode(encodedThisValue));
    if (UNLIKELY(!thisObject)) {
        return JSValue::encode(jsUndefined());
    }
    JSC::EnsureStillAliveScope thisArg = JSC::EnsureStillAliveScope(thisObject);
    JSC::EncodedJSValue result = StatsPrototype__atimeMs(thisObject->wrapped(), globalObject);
    RETURN_IF_EXCEPTION(throwScope, {});
    RELEASE_AND_RETURN(throwScope, result);
}
```

Memory management help

Bun frequently has to allocate long-lived values that can only be freed after the garbage collector calls a finalization callback, when some asynchronous work completes, or both.

We would love to use arenas or have some better alternative to this, but we need the memory to be freed as soon as the work completes, and it needs to be difficult for us to make a mistake.

For this, we use a reference-counting mixin or a memory allocation mixin that implements a new function, destroy function, ref function, etc.

We currently have 3 of these mixins, and there are at least 60 usages in declarations and 324 calls to .new.

pub fn NewThreadSafeRefCounted(comptime T: type, comptime deinit_fn: ?fn (self: *T) void) type {
    if (!@hasField(T, "ref_count")) {
        @compileError("Expected a field named \"ref_count\" with a default value of 1 on " ++ @typeName(T));
    }

    for (std.meta.fields(T)) |field| {
        if (strings.eqlComptime(field.name, "ref_count")) {
            if (field.default_value == null) {
                @compileError("Expected a field named \"ref_count\" with a default value of 1 on " ++ @typeName(T));
            }
        }
    }

    const output_name: []const u8 = if (@hasDecl(T, "DEBUG_REFCOUNT_NAME")) T.DEBUG_REFCOUNT_NAME else meta.typeBaseName(@typeName(T));

    const log = Output.scoped(output_name, true);

    return struct {
        pub fn destroy(self: *T) void {
            if (Environment.allow_assert) {
                assert(self.ref_count.load(.seq_cst) == 0);
            }

            bun.destroy(self);
        }

        pub fn ref(self: *T) void {
            const ref_count = self.ref_count.fetchAdd(1, .seq_cst);
            if (Environment.isDebug) log("0x{x} ref {d} + 1 = {d}", .{ @intFromPtr(self), ref_count, ref_count - 1 });
            bun.debugAssert(ref_count > 0);
        }

        pub fn deref(self: *T) void {
            const ref_count = self.ref_count.fetchSub(1, .seq_cst);
            if (Environment.isDebug) log("0x{x} deref {d} - 1 = {d}", .{ @intFromPtr(self), ref_count, ref_count -| 1 });

            if (ref_count == 1) {
                if (comptime deinit_fn) |deinit| {
                    deinit(self);
                } else {
                    self.destroy();
                }
            }
        }

        pub fn new(t: T) *T {
            const ptr = bun.new(T, t);

            if (Environment.enable_logs) {
                if (ptr.ref_count.load(.seq_cst) != 1) {
                    Output.panic("Expected ref_count to be 1, got {d}", .{ptr.ref_count.load(.seq_cst)});
                }
            }

            return ptr;
        }
    };
}

This instrumentation lets us conditionally log when a type is allocated and track how many of each type are allocated (in debug builds), which helps us spot memory leaks and some performance issues.
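
As a hedged illustration of that kind of instrumentation (invented names, not Bun's actual code), a per-type live-allocation counter might look like:

const std = @import("std");

pub fn AllocCounter(comptime T: type) type {
    return struct {
        // One counter per instantiation, i.e. per type T.
        var live = std.atomic.Value(usize).init(0);

        pub fn onNew() void {
            _ = live.fetchAdd(1, .monotonic);
        }
        pub fn onDestroy() void {
            _ = live.fetchSub(1, .monotonic);
        }
        pub fn report() void {
            std.debug.print("{s}: {d} live\n", .{ @typeName(T), live.load(.monotonic) });
        }
    };
}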

Without a better answer for the mixin use cases, I worry this proposal will directly and indirectly lead to more crashes and more mistakes, and cause us to ship more slowly.

aaal-dev commented 1 month ago

Maybe a mixin keyword is the right word for usingnamespace behavior inside a struct, or maybe even @mixin(some_type). In the example counter: CounterMixin(Foo, "counter") = .{}, we have to duplicate the name of the field for no purpose other than passing that name through into the «mixin» struct. The field is not needed for anything except the cast, and inside the «mixin» struct we still don't know where that field came from, so it is no help for readability.

sin-ack commented 1 month ago

While the "mixin" alternative supplied in the OP covers some use-cases, I don't think it covers a use-case that I developed in zigSelf, which is common/generic implementations of object methods. Some examples:

https://github.com/sin-ack/zigself/blob/0610529f7a04ec0d0628abc3e89dafd45b995562/src/runtime/intrinsic_map.zig#L21 https://github.com/sin-ack/zigself/blob/0610529f7a04ec0d0628abc3e89dafd45b995562/src/runtime/objects/slots.zig#L184

The main issue is that conceptually, none of these mixins are "separate" from the object's own methods, since they all implement basic methods as part of the object's own interface. This is unlike the counter example because that example only implements an "aspect" that's tangential to the object itself, whereas these are core methods that define an object's identity. Additionally, compare IntrinsicMap vs. SlotsLikeObjectBase/AssignableSlotsMixin here: They both implement the same methods together, but there is no clear "field" that these mixins could be assigned to.

I agree with the rest of the proposal, but I think an affordance should be made for mixins, as they're super useful in being able to share common behavior across structs, especially considering Zig doesn't have a way to inherit behavior.

rohlem commented 1 month ago

@Jarred-Sumner

From what I understand you could get rid of `usingnamespace` in two ways:

```zig
// option 1: explicitly import the declarations (instead of `usingnamespace` importing all of them)
pub const S = struct {
    // ... actual contents, including fn deinit ...

    const generated = GeneratedMethods(S);
    // instead of `pub usingnamespace generated;` import the declarations one-by-one
    pub const fromJS = generated.fromJS;
    pub const fromJSDirect = generated.fromJSDirect;
    pub const StatsPrototype__atimeSetCachedValue = generated.StatsPrototype__atimeSetCachedValue;
    pub const StatsPrototype__atimeGetCachedValue = generated.StatsPrototype__atimeGetCachedValue;
    pub const atimeSetCached = generated.atimeSetCached;
    pub const atimeGetCached = generated.atimeGetCached;
    // ...

    const lifetime_management = NewThreadSafeRefCounted(S, S.deinit);
    // instead of `pub usingnamespace lifetime_management;` import the declarations one-by-one
    pub const destroy = lifetime_management.destroy;
    pub const ref = lifetime_management.ref;
    pub const deref = lifetime_management.deref;
    pub const new = lifetime_management.new;
};

// option 2: add `@fieldParentPtr` hack to the functions and change the self argument type, then add them as 0-size mixin members:
pub const M = struct {
    // ... actual fields ...
    generated: GeneratedMethods(M), // accessed as instance.generated.fromJS() instead of instance.fromJS()
    lifetime_management: NewThreadSafeRefCounted(M, M.deinit), // accessed as instance.lifetime_management.ref() instead of instance.ref()

    // ... actual contents, including fn deinit ...
};
```

Explicit re-exports (option 1) are a lot of boilerplate (roughly 772 lines for 772 declarations), but can be rather safely copy-pasted from a single source, and would form a single homogeneous block that is (relatively) easily skipped when reading the code. Maybe 0-size mixin members (option 2) could turn out to be less boilerplate overall if you can afford to additionally shorten the names, for example reserving g for generated and m for lifetime_management; then the calls become e.g. instance.fromJS(...) -> instance.g.fromJS(...) and instance.ref() -> instance.m.ref().

Without a better answer for the mixin usecases

Your comment doesn't really mention the proposed alternatives (maybe just for brevity). Even without usingnamespace you have option 1 (more verbose re-exports in the type definition) or option 2 (0-size mixin field + an additional layer when accessing the function) to transition the code to. Neither is quite as short as the status quo. Would you consider these alternatives sufficient/acceptable, or not?


@sin-ack You can explicitly re-export all declarations. If you would prefer the grouping for consistency across code, you still have the option of naming the category basic, memory, lifetime, or something like that. Or you could split it up into multiple categories that are easier to name, f.e. I'd come up with as, size, and finalization/fin - createObject could be re-exported directly since it seems like only a single function.

If a single category is fulfilled by combining two generated types, I think the only solution to combine them would be explicit re-exports:

const Basic = struct {
    const a = A();
    const b = B();
    pub const getSizeInMemory = a.getSizeInMemory;
    pub const getSizeForCloning = b.getSizeForCloning;
};

Snektron commented 1 month ago

Memory management help

Refcounting can also be implemented using a regular wrapper type. Since "mixins" cannot add fields, you are currently forced to manually add the refcount field in the refcounted type, and in my opinion that breaks mixin-style abstraction anyway. @mlugg's proposed alternative hides that somewhat, though you are currently still required to match up the field names.

Semantically, a mixin here is just the same as a freestanding function or any other method. It seems to me that what you are really using usingnamespace for is avoiding having to write either a get() function to fetch the inner type of a RefCounted(T), imports or a fully qualified name for freestanding functions that implement refcounting, or accessing a sub-field like thing.ref_count.retain(). In other words it's used as syntax sugar instead of something that absolutely critically needs to be implemented with usingnamespace because of some semantic issue.
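
As a concrete illustration of the wrapper/sub-field alternative, here is a minimal sketch (hypothetical `RefCount` and `Thing` names, not taken from any project discussed here; a thread-safe version would use an atomic counter):

```zig
// A refcount kept in a plain field -- no mixin machinery involved.
const RefCount = struct {
    count: usize = 1,

    fn retain(rc: *RefCount) void {
        rc.count += 1;
    }

    /// Returns true when the last reference was dropped and the owner should clean up.
    fn release(rc: *RefCount) bool {
        rc.count -= 1;
        return rc.count == 0;
    }
};

const Thing = struct {
    ref_count: RefCount = .{},
    data: u32 = 0,
};

// Call sites go through the sub-field explicitly:
//   thing.ref_count.retain();
//   if (thing.ref_count.release()) { /* free `thing` here */ }
```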

The other alternative that I haven't seen mentioned for mixins is to import all functions separately:

pub const ref = RefCounted.retain;
pub const deref = RefCounted.deref;
pub const destroy = RefCounted.destroy;
pub const new = RefCounted.new;

180 extra lines (probably more for the other types), but you don't need to update the callsites.

sin-ack commented 1 month ago

You can explicitly re-export all declarations.

This is exactly what I want to avoid. This means that I will have tons of code churn every time a method needs to be added. Many more object types will be created in the future, so this will just be an increasing cost.

If you would prefer the grouping for consistency across code, you still have the option of naming the category basic, memory, lifetime, or something like that.

Again, there is no clear separation. A single mixin can implement multiple different aspects. Splitting it up into multiple mixins does not make sense.

mlugg commented 1 month ago

@Jarred-Sumner

Your refcount example literally seems like the perfect example of the advantages of my mixin system. The refcounting stuff can be namespaced rather than awkwardly including the "pseudo-namespace" ref in the function names, and the field can be incorporated into the mixin rather than expecting a field on the parent struct. See https://zigbin.io/25c171 for an example.
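
Here is a hedged sketch (hypothetical `RefCounted`/`Node` names, not the actual code behind the zigbin link) of the field-based mixin shape being described: the count lives inside the mixin field, and the mixin recovers its parent via `@fieldParentPtr`:

```zig
fn RefCounted(comptime Parent: type, comptime field_name: []const u8) type {
    return struct {
        count: usize = 1,

        const Mixin = @This();

        fn parent(m: *Mixin) *Parent {
            // Recover the containing struct from a pointer to the mixin field.
            return @alignCast(@fieldParentPtr(field_name, m));
        }

        pub fn ref(m: *Mixin) void {
            m.count += 1;
        }

        pub fn deref(m: *Mixin) void {
            m.count -= 1;
            if (m.count == 0) m.parent().deinit();
        }
    };
}

const Node = struct {
    rc: RefCounted(Node, "rc") = .{},
    data: u32 = 0,

    fn deinit(node: *Node) void {
        _ = node; // release resources here
    }
};

// Usage: node.rc.ref(); ... node.rc.deref();
```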

Your explanation of generated code reads like you forgot to finish that section -- you don't actually give a concrete example of how you use usingnamespace here. But, assuming I've understood the gist of it -- why not just have the code generator include the definitions from your hand-written code? To be honest, I feel like autogenerated and handwritten code should be namespaced separately using my mixin system, but if you're averse to that, just have the code generator include your handwritten code and you can keep the same usage as today.

notcancername commented 1 month ago

In other words it's used as syntax sugar instead of something that absolutely critically needs to be implemented with usingnamespace because of some semantic issue.

This is where much of the usefulness of usingnamespace lies, disregarding conditional inclusion. Pasting the declarations leads to more code that is harder to read overall.

lawrence-laz commented 1 month ago

Solutions with usingnamespace provide better writing experience - a dedicated keyword results in less code, which means less effort and less room for mistakes.

Solutions without usingnamespace provide better readability - either via explicit enumeration of decls or simple composition, and increased cohesion of related logic.

Zig is aiming to favour readability, so this seems like a good move.

sin-ack commented 1 month ago

Solutions with usingnamespace provide better writing experience - a dedicated keyword results in less code, which means less effort and less room for mistakes.

Solutions without usingnamespace provide better readability - either via explicit enumeration of decls or simple composition, and increased cohesion of related logic.

Zig is aiming to favour readability, so this seems like a good move.

I disagree. The example that mlugg gave above for the Vec3 code looks significantly harder to read to me personally. It makes it very hard to understand the flow of methods present in the struct. I understand if this isn't the popular opinion, but I feel my code's readability would actually be worse if I had to re-export everything all the time.

lawrence-laz commented 1 month ago

I disagree. The example that mlugg gave above for the Vec3 code looks significantly harder to read to me personally.

It's hard to discuss these things objectively as they are subjective by nature I guess.

But, I think, besides the personal preference, there is also the factor of whether you are reading the code for the first time, or have already read/written it and know where things are and/or where they come from. The former benefits directly from readability and being able to follow what's going on, while the latter maybe benefits from brevity as well, as you need just a quick scan, just a "reference", to remember what is going on.

It makes it very hard to understand the flow of methods present in the struct.

I think it can be shown that the "flow" of methods is harder to follow with usingnamespace in these examples. Both code versions, with and without usingnamespace, are relatively easy to understand, as the thing being implemented is quite simple, so this is a bit of a bikesheddy analysis, but oh well.

I read through mlugg's example top to bottom without stopping or needing to scroll back up to understand things; everything was always visible and nearby (higher cohesion). Reading the original code with usingnamespace, on the other hand, I had to stop and scroll back a few times (lower cohesion).

How it went with the original code with usingnamespace:

// Line 167
pub inline fn toVec2(self: Self) GenericVector(2, T) {   // <-- ok, this converts self to vec2... which one was Self again? (not visible on screen anymore)
    return GenericVector(2, T).fromVec4(self);           // <-- oh it's calling `fromVec4` so Self must be vec4
                                                         // hm, so this is just a proxy function imported into vec4 that calls a function from vec2, let's see...
}

scroll up 120 lines to where vec2 is defined:

// Line 47
2 => extern struct { // <-- found it, now it must have a function fromVec4

scroll down 28 lines:

// Line 75
pub inline fn fromVec4(vec4: GenericVector(4, T)) Self {
    return Self.new(vec4.x(), vec4.y()); // <-- right, so it just takes the x and y components
}
// ok, I can continue reading, where was I..? *looks for vec4 again*

How it went looking at the modified code without usingnamespace:

// Line 40
pub const toVec2 = switch (dimension) {
    3 => v3To2,   // <-- ok, so based on how many components we have, we call an appropriate conversion function
    4 => v4To2,
    else => @compileError("unsupported"),
};

scroll down 32 lines (through other variants of toVecX and fromVecX, which all look the same)

// Line 72
const V2 = GenericVector(2, T);
const V3 = GenericVector(3, T);
const V4 = GenericVector(4, T); // <-- and these are all the conversion functions
fn v2To3(v: V2, z: T      ) V3 { return V3.init(v.x(), v.y(), z);        }   
fn v2To4(v: V2, z: T, w: T) V4 { return V4.init(v.x(), v.y(), z, w);     }
fn v3To2(v: V3            ) V2 { return V2.init(v.x(), v.y());           }
fn v3To4(v: V3, w: T      ) V4 { return V4.init(v.x(), v.y(), v.z(), w); }
fn v4To2(v: V4            ) V2 { return V2.init(v.x(), v.y());           }
fn v4To3(v: V4            ) V3 { return V3.init(v.x(), v.y(), v.z());    }
// ok, moving on *continue scrolling down*

The one with usingnamespace makes sense once you put in the work to understand it and definitely feels smarter or "cleaner", but the second one without usingnamespace is just stupid easy to read.

omaraaa commented 1 month ago

Hello,

Currently, I'm against removing usingnamespace unless a better solution for mixins is implemented. My main issue with the currently proposed mixin solution is that it makes code harder to read. For example, it is hard to know whether I'm mutating the field or the parent if I use the zero-bit field strategy.

For example, say I use it in my library VecFns; then, for the code below:

var p = Pos {};
p = p.vec.add(1);

Am I adding to p or vec? It's not clear.

I'll support removing usingnamespace if namespace composition functionality is provided. I'm fine with the below if you allow me to join a struct that has fields with a namespace to generate a new type.

const Pos = WithVecFns(struct {...});

Maybe introduce something like a @Namespace(this: type) {} builtin to the language, so it could be used like so:


pub fn WithMyMixin(comptime T: type) type {
   return @Namespace/Mixin/orwhatever(T) {
     //decls here...
     //no fields allowed
   }
}

DerryAlex commented 1 month ago

Are zero-bit types well defined in extern struct? If not, mixins may not be usable for bindings.

xdBronch commented 1 month ago

yes, see https://github.com/ziglang/zig/issues/16394 and https://github.com/ziglang/zig/pull/16404

mnemnion commented 1 month ago

There's a subtle, but critical, distinction here: between changing the language to make the compiler better, on the one hand, and on the other hand, changing it to make life a bit easier for the compiler devs.

In my opinion, this proposal is an example of the latter. I'm against it for several reasons, but that's the first one.

Zig does not exist to compile Zig. This is, in fact, a well-known trap in language design: dogfooding a language by developing a compiler for it in that very language can easily lead to a language which is good for compiling itself, and not much else. Zig has largely avoided that trap. Let's not fall into it here.

When you say this:

The reason for this is that it complicates the model for the reasons we identified above: without usingnamespace, we can know all names within a namespace purely from syntax, whereas with usingnamespace, semantic analysis may be required.

I don't see how it can possibly be true. Perhaps what you mean is that the syntactic analysis has to include the file or other container which usingnamespace imports?

That's fine though. You have to track that anyway. There are some good data structures which make this very easy in fact: they let you base a mapping "in front of" another mapping, so a usingnamespace call is modeled by resolving the base namespace and putting it behind the current one. Shadowing is, IIRC, forbidden, so you won't even get to the incremental stage if a usingnamespace declaration creates a shadow.
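
For illustration, a rough sketch (a hypothetical `Namespace` type, not code from any real compiler) of that layered-mapping idea: lookup consults the front map first and falls through to the base behind it.

```zig
const std = @import("std");

// A namespace modeled as a name -> declaration map, optionally layered in
// front of a base namespace (e.g. the target of a `usingnamespace`).
const Namespace = struct {
    decls: std.StringHashMapUnmanaged(u32) = .{},
    base: ?*const Namespace = null,

    fn lookup(ns: *const Namespace, name: []const u8) ?u32 {
        // The front map wins; otherwise fall through to the base mapping.
        if (ns.decls.get(name)) |decl| return decl;
        if (ns.base) |base| return base.lookup(name);
        return null;
    }

    // A shadow would be a name present both in `decls` and somewhere down the
    // `base` chain -- detectable by checking `base.lookup(name)` on insertion.
};
```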

Autodoc cannot reasonably see through non-trivial uses of usingnamespace (try looking for dl_iterate_phdr under std.c in the std documentation).

Autodoc cannot right now reasonably see through non-trivial uses of usingnamespace. If this were impossible, then the compiler itself would be unable to detect name shadowing resulting from usingnamespace! Tooling immaturity in a language which is still rapidly evolving is a really bad reason to remove a feature from that language.

My case for keeping it is this:

usingnamespace Preserves a Single Source of Truth

That's it, everything else is derivative of this. A single source of truth reduces error and tedium. Removing usingnamespace will make the language more error-prone and tedious. It baffles me that a combination of @fieldParentPtr and @alignCast is being offered as an acceptable replacement here. That just becomes "we used to have an easy and elegant way of doing this, but someone didn't like it, it got removed, and now you have to do a bunch of rigamarole instead".

The rest of the proposal is, in my considered opinion, a bunch of "this is good for you actually" which is chaff to conceal the heart of it, which is wanting it to be easier to ship a certain feature. usingnamespace isn't blocking the feature, we've disposed of that, it's simply making it a bit more difficult.

As I said at the beginning, this is a bad reason to remove a useful feature from the language.

Here's your use case:

//! mylibrary.zig

pub usingnamespace @import("sub-library-one.zig");
pub usingnamespace @import("sub-library-two.zig");

Explain how we keep a single source of truth here, and I might be persuaded. But I'm not seeing it.

mlugg commented 1 month ago

@mnemnion

Zig does not exist to compile Zig.

This point doesn't seem relevant. Simplifying the language is a desirable goal for users and for literally all tooling. I am not suggesting to modify the language purely because it's easier for a compiler if that compiler is written in Zig; it's not like usingnamespace would be any easier to implement if the compiler were written in C, or Rust, or Python, or Haskell, or JavaScript.

When you say this:

The reason for this is that it complicates the model for the reasons we identified above: without usingnamespace, we can know all names within a namespace purely from syntax, whereas with usingnamespace, semantic analysis may be required.

I don't see how it can possibly be true. Perhaps what you mean is that the syntactic analysis has to include the file or other container which usingnamespace imports?

This question indicates to me a lack of understanding of the usingnamespace construct. Its operand does not have to be an @import, but can be any Zig expression which resolves to a container type (struct, union, enum, opaque). For this reason, semantic analysis is required to determine what that type is. This could execute arbitrary comptime logic, so full semantic analysis capabilities are necessary.
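
For illustration only (hypothetical names, not code from the proposal or the compiler), the operand can be produced by arbitrary comptime logic, so even enumerating the names in `S` requires evaluating that logic:

```zig
const builtin = @import("builtin");

// Arbitrary comptime logic decides which container ends up being merged in.
fn pickNamespace(comptime fast: bool) type {
    if (fast) {
        return struct {
            pub fn impl() void {}
        };
    } else {
        return struct {
            pub fn impl() void {}
            pub fn extraDebugDump() void {}
        };
    }
}

const S = struct {
    // The set of names in `S` cannot be read off the syntax; the compiler has
    // to evaluate `pickNamespace(...)` to know whether `S.extraDebugDump` exists.
    pub usingnamespace pickNamespace(builtin.mode == .ReleaseFast);
};
```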

Shadowing is, IIRC, forbidden, so you won't even get to the incremental stage if a usingnamespace declaration creates a shadow.

By the above, this is not true. In fact, detecting this kind of shadowing is a non-trivial multi-stage process, because of the rules in the compiler I mentioned in the original issue post.

Autodoc cannot reasonably see through non-trivial uses of usingnamespace (try looking for dl_iterate_phdr under std.c in the std documentation).

Autodoc cannot right now reasonably see through non-trivial uses of usingnamespace. If this were impossible, then the compiler itself would be unable to detect name shadowing resulting from usingnamespace! Tooling immaturity in a language which is still rapidly evolving is a really bad reason to remove a feature from that language.

This isn't about tooling immaturity; it's about tooling simplicity. Autodoc, by design, does not perform semantic analysis. Perhaps in future a similar tool will, providing some kind of "multibuilds" system to figure out how the API might differ across build options / targets / etc; but the current implementation of Autodoc avoids this in the name of simplicity. Simplicity is an explicit design goal of Zig.

usingnamespace Preserves a Single Source of Truth

I don't understand what this section of your comment is trying to say: you haven't explained what you mean by "single source of truth".

Your last example is a perfect instance of "namespacing is good, actually". You describe these files as "sub-libraries"; that's a really obvious indication that they're doing different things. That means they should be namespaced separately.

rohlem commented 1 month ago

@mlugg

direct responses/comments

>> Zig does not exist to compile Zig.
> This point doesn't seem relevant. [... I]t's not like `usingnamespace` would be any easier to implement if the compiler were written in C, or Rust, or Python, or Haskell, or JavaScript.

I understood that point to mean "Zig's end goal is not compiling Zig", but that Zig is rather meant to help the development of other software. So if it turns out developers using Zig prefer `usingnamespace` over the alternatives, the value gained by simplifying the language may not outweigh the benefit of having it.

> [`usingnamespace`'s] operand does not have to be an `@import`, but can be any Zig expression which resolves to a container type (`struct`, `union`, `enum`, `opaque`).

As far as I understand, all of the workarounds only improve on this locally.

- If a status-quo `struct` contains `usingnamespace Impl;`, then `Impl` may require semantic analysis. Whether or not `instance.foo()` exists depends on semantic analysis.
- Option A: make it a 0-bit field. Whether `instance.impl.foo()` exists is still up to semantic analysis of the 0-bit mixin field type.
- Option B: make an alias `const foo = Impl.foo;`. Now the declaration exists, but it equally leads to a compile error depending on semantic analysis of `Impl`.

This does potentially catch a class of errors (referencing non-existing declarations) earlier in the pipeline. But the ultimate decision of whether what the declaration aliases exists still can't be made without semantic analysis.

>> usingnamespace Preserves a Single Source of Truth
>> [... A single source of truth reduces error and tedium. ...]
> I don't understand what this section of your comment is trying to say: you haven't explained what you mean by "single source of truth".

My interpretation, not 100% certain, is simply that one sentence. Having to manually do the job that `usingnamespace` in status-quo automates is tedious and error-prone. (I personally think it's possible that without `usingnamespace` in the language, people will add a custom build/preprocessing step for doing the same thing it does now.)

> Your last example is a perfect instance of "namespacing is good, actually". You describe these files as "sub-libraries"; that's a really obvious indication that they're doing different things. That means they should be namespaced separately.

(I wouldn't assert that two libraries can't do similar enough things.) I think it's difficult to objectively say what the best way to write code is. I don't know if it's objectively better to include or exclude `.lib_a` or `.lib_b` in a field access sequence.

I do think that people naturally gravitate towards trying to reduce work. Maybe the notion that `usingnamespace` is less work than the alternatives is ultimately an illusion/untrue. However, I'm currently not convinced that using more namespaces than in status-quo ultimately reduces work either. (For now) I can't really say one way or the other.


To me speeding up development of the incremental compilation feature seems to be a significant motivation behind this idea. I think it would be perfectly acceptable to not (or not fully) support usingnamespace in incremental mode until this proposal has been decided. At that point, maybe interest in incremental compilation has convinced people to implement the proposed workarounds in their projects, and they can share their experience to help with the decision. Or someone else contributes working support for usingnamespace in incremental mode - at which point only the remaining motivators behind it need to be considered.

(Maybe this has been the plan forward anyway, I couldn't really find these operational details in between the code discussion in this issue.)

I do think there's a lot of value in canonicalization, unification of coding style, and simplicity. That said, if an unclean feature helps someone in helping users, imo that's enough justification for using it. Together we serve the users.

mlugg commented 1 month ago

@mlugg direct responses/comments

Zig does not exist to compile Zig.

This point doesn't seem relevant. [... I]t's not like usingnamespace would be any easier to implement if the compiler were written in C, or Rust, or Python, or Haskell, or JavaScript.

I understood that point to mean "Zig's end goal is not compiling Zig", but that Zig is rather meant to help the development of other software. So if it turns out developers using Zig prefer usingnamespace over the alternatives, the value gained by simplifying the language may not outweigh the benefit of having it.

That's a reasonable point, but I don't think it adds anything to the discussion (it's essentially just saying "there might be reasonable arguments against this proposal"), so let's move on.

[usingnamespace 's] operand does not have to be an @import, but can be any Zig expression which resolves to a container type (struct, union, enum, opaque).

As far as I understand, all of the workarounds only improve on this locally.

* If a status-quo `struct` contains `usingnamespace Impl;`, then `Impl` may require semantic analysis. Whether or not `instance.foo()` exists depends on semantic analysis.

* Option A: make it a 0-bit field. Whether `instance.impl.foo()` exists is still up to semantic analysis of the 0-bit mixin field type.

* Option B: make an alias `const foo = Impl.foo;`. Now the declaration exists, but equally leads to a compile error depending on semantic analysis of Impl.

This does potentially catch a class of errors (referencing non-existing declarations) earlier in the pipeline. But the ultimate decision of whether what the declaration aliases exists still can't be made without semantic analysis.

Knowing the declaration exists is important, though. From the lens of readability, this allows you to easily chase declarations through source code without having to check potentially arbitrarily many files (this is also useful for tooling), and moreover, you know which declarations exist in a given namespace without any semantic analysis. The problems usingnamespace causes with incremental compilation are more subtle, but I'll try and explain them briefly at the bottom of this response.

usingnamespace Preserves a Single Source of Truth [... A single source of truth reduces error and tedium. ...]

I don't understand what this section of your comment is trying to say: you haven't explained what you mean by "single source of truth".

My interpretation, not 100% certain, is simply that one sentence. Having to manually do the job that usingnamespace in status-quo automates is tedious and error prone.

A key point of my original post is that you shouldn't try and mimic the behavior of usingnamespace. It typically results in less readable code which is harder for the compiler to deal with. The examples I gave are not "workarounds for not having usingnamespace", but better patterns. Explicit namespacing is good, so I encouraged that, and encouraged it for mixins with the described mixin system; conditional inclusion of symbols is a bad idea (the only extra thing it allows is @hasDecl checks which are incredibly error-prone), so I proposed an alternative which simplifies the namespacing; etc.
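
As a hedged illustration of why `@hasDecl`-based conditional inclusion is error-prone (hypothetical names, not from the original post): `@hasDecl` returns `false` for any name that doesn't exist, so a typo silently selects the wrong branch instead of producing a compile error.

```zig
const backend = struct {
    pub fn fastPath() void {}
};

pub fn run() void {
    // "fast_path" is misspelled (the decl is `fastPath`), so the check is
    // silently false and the slow path is always taken -- no error is reported.
    if (@hasDecl(backend, "fast_path")) {
        backend.fastPath();
    } else {
        slowPath();
    }
}

fn slowPath() void {}
```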

(I personally think it's possible that without usingnamespace in the language, people will add a custom build/preprocessing step for doing the same thing it does now.)

I think you hugely overestimate how much usingnamespace is used in the wild. Bun is the biggest user of it, with 203 uses, most of which are simple mixins. ghostty has 51 uses of the keyword, which look relatively straightforward to eliminate, and IMO appear to make the code worse. Mach has a small handful, and Stephen has explicitly said he supports this proposal, noting that usingnamespace in practice seems to lead to much worse code readability. TigerBeetle too has a very small number. usingnamespace is not a widely used feature of Zig.

Your last example is a perfect instance of "namespacing is good, actually". You describe these files as "sub-libraries"; that's a really obvious indication that they're doing different things. That means they should be namespaced separately.

(I wouldn't assert that two libraries can't do similar enough things.) I think it's difficult to objectively say what the best way to write code is. I don't know if it's objectively better to include or exclude .lib_a or .lib_b in a field access sequence.

I do think that people naturally gravitate towards trying to reduce work. Maybe the notion that usingnamespace is less work than the alternatives is ultimately an illusion/untrue. However, I'm currently not convinced that using more namespaces than in status-quo ultimately reduces work either. (For now) I can't really say one way or the other.

I suspect we won't get anywhere with this discussion unless @mnemnion provides a real-world use case for usingnamespace here, rather than a generalized and idealized example.

To me speeding up development of the incremental compilation feature seems to be a significant motivation behind this idea. I think it would be perfectly acceptable to not (or not fully) support usingnamespace in incremental mode until this proposal has been decided.

Your analysis is incorrect, because for months already, the plan has been for the first incarnation of incremental compilation to not support usingnamespace. This proposal doesn't do anything to speed up us shipping the feature.

At that point, maybe interest in incremental compilation has convinced people to implement the proposed workarounds in their projects, and they can share their experience to help with the decision. Or someone else contributes working support for usingnamespace in incremental mode - at which point only the remaining motivators behind it need to be considered.

(Maybe this has been the plan forward anyway, I couldn't really find these operational details in between the code discussion in this issue.)

I do think there's a lot of value in canonicalization, unification of coding style, and simplicity. That said, if an unclean feature helps someone in helping users, imo that's enough justification for using it. Together we serve the users.


Since you seem to be interested, here's a quick summary of why usingnamespace is problematic for incremental compilation. This may require some compiler knowledge, and I don't have time to expand on it in further comments at this time -- it should suffice to say that these problems do exist. Nevertheless, here we go.

A key concept in incremental compilation is that different pieces of code can register "dependencies" on things like the value of a global declaration. One thing you might have a dependency on is "does the name X exist in the namespace of type Y" -- for instance, this can come from @hasDecl or @typeInfo. On an incremental update, we need to figure out if this dependency is "outdated" -- that is, whether the name has started/stopped existing in that namespace. Without usingnamespace, this is a really simple task, because the dependency can just store a reference to the ZIR namespace, so we know whether it has changed as soon as we finish AstGen. However, usingnamespace complicates this significantly, as we might need to re-analyze the usingnamespace declaration to determine the new namespace.

This is also complicated for accessing declarations. When we access a declaration, we place a dependency on the resolved value of the corresponding Decl. Of course, these can only be determined to be up-to-date or outdated after some amount of semantic analysis. However, a field access which reaches through a usingnamespace requires an unreasonable number of dependencies. Consider this code snippet:

const A = struct {
    usingnamespace B;
    usingnamespace C;
};
const B = struct {};
const C = struct {
    const x = 123;
};

Given the access A.x, which dependencies are necessary? Well, we depend on the value of the second usingnamespace declaration, and on the value of its x declaration. However, we're not done yet! We also depend on the fact that there is no declaration x directly in A, since this would lead to an ambiguous reference error. For the same reason, we also depend on the fact that there is no x in B, and also on the value of the first usingnamespace. We also, of course, depend on there not being any added usingnamespace declarations in A.

We can't wrap all of this logic up into one kind of dependency, because usingnamespace has subtle rules (I mentioned these before) where it won't consider certain usingnamespaces in field accesses to avoid dependency loops -- for instance, this allows C to reference A without things breaking; so, we actually have to register this as 5-ish separate dependencies! The existence of recursive usingnamespace also makes constructing these dependencies relatively complex, since we need to track which namespaces have been "seen" so far.

All in all, this could mean that a single field access could easily trigger hundreds of incremental dependencies when usingnamespace is involved (in fact there is no limit). We could probably design a dependency model which improves this situation, but the bottom line is that this is a hard problem stacked on top of the already-hard problem of incremental compilation.

alexrp commented 1 month ago

I was initially conflicted on the removal of usingnamespace, but seeing this:

Its operand does not have to be an @import, but can be any Zig expression which resolves to a container type (struct, union, enum, opaque). For this reason, semantic analysis is required to determine what that type is. This could execute arbitrary comptime logic, so full semantic analysis capabilities are necessary.

convinced me. I mean, it's not like I didn't know this, but as a user, I never really thought about and fully appreciated this and the broader language design implications while using the construct. Seeing it spelled out here made that click.

Putting on my PL design / compiler development hat, it does actually seem a bit crazy that you have to invoke the full comptime engine just to determine which declaration identifiers exist in a container - not even what they evaluate to, just their mere existence. It's the kind of language design wart I would personally not be willing to accept in my projects. Even setting aside incremental compilation, this is obviously all kinds of undesirable for something as basic as a language server.

So :+1: from me FWIW, and off I go to rewrite my usingnamespace usage.

sin-ack commented 1 month ago

I think you hugely overestimate how much usingnamespace is used in the wild. Bun is the biggest user of it, with 203 uses, most of which are simple mixins. ghostty has 51 uses of the keyword, which look relatively straightforward to eliminate, and IMO appear to make the code worse. Mach has a small handful, and Stephen has explicitly said he supports this proposal, noting that usingnamespace in practice seems to lead to much worse code readability. TigerBeetle too has a very small number. usingnamespace is not a widely used feature of Zig.

It's not a widely-used feature of Zig because Zig is not widely used. There are only a few large-scale projects outside of the Zig compiler, and most of them aren't feature-complete yet.

A key point of my original post is that you shouldn't try and mimic the behavior of usingnamespace. It typically results in less readable code which is harder for the compiler to deal with. The examples I gave are not "workarounds for not having usingnamespace", but better patterns.

"less readable" is very subjective. For me personally, the mixin use-case of usingnamespace both makes life significantly easier, and much less error-prone and fiddly to extend the code later on. Yes, obviously this is also subjective, but I think the freedom should be given to let the user decide how to organize the codebase.

Re: incremental compilation, I understand your plight, but I don't think "difficulty of implementation" should be a deciding factor on whether a feature that benefits user freedom and makes it simpler to use Zig should be in the language.

nektro commented 1 month ago

to provide more "in the wild" usage https://github.com/nektro/zig-extras/blob/74f0ddb/src/lib.zig

This library has a significant amount of churn at times, and keeping the tests close to the implementations is great. I used to have this all in one big file, but it was unruly and it didn't have tests; the desire to add some without making the file even more difficult to navigate is what motivated the split in the first place.

nektro commented 1 month ago

to provide more "in the wild" usage https://github.com/nektro/zig-intrusive-parser/blob/2336d8b/intrusive_parser.zig this is a shared module that gets used in many different parsers i author. it includes some shared state that is embedded in an independent field but also provides a mixin so that consumer parsers and provide an api that looks cohesive; eg methods from the intrusive one intentionally dont look different than extension methods since the use of the intrusive parser in the first place is otherwise an implementation detail.

https://github.com/nektro/zig-json/blob/2958707/json.zig#L254 example consumer usage

alexrp commented 1 month ago

Re: incremental compilation, I understand your plight, but I don't think "difficulty of implementation" should be a deciding factor on whether a feature that benefits user freedom and makes it simpler to use Zig should be in the language.

I distinctly remember Andrew making the point in a Zig roadmap stream that he is very much willing to make breaking changes to the language for the sake of implementability and optimizability. I don't get the impression that that particular aspect is really up for meaningful debate.

And FWIW, having worked on compilers, VMs, and some language design for over a decade, I think this is the correct choice. There is a reason that "sufficiently smart compiler" is a well-known fallacy in PL design circles; it almost never materializes, and designing on the assumption that it will is rarely a good bet.