Open andrewrk opened 3 days ago
It would be appropriate to use this in the deinit function, for example. Since it's the standard for deinit to set a struct to undefined after the free, why not just do:
fn deinit(foo: *Foo, gpa: std.mem.Allocator) void {
    gpa.free(foo.b);
    gpa.free(foo.c);
    foo.* = .{
        .a = undefined,
        .b = undefined,
        .c = undefined,
    };
}
This will then "notify" you in the deinit function when you add a new field to Foo.
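For readers following along, here is an untested sketch of the kind of helper under discussion. Note this variant triggers the same missing-field check through parameter coercion rather than the pointer cast described later in the thread; the names Used and markAllFields are illustrative, not the actual std.meta.use API:

```zig
const std = @import("std");

// Mirror of T in which every field's type is void; coercing a struct
// literal to it exercises the compiler's missing-field check without
// storing any data.
fn Used(T: type) type {
    const info = @typeInfo(T).@"struct";
    var new_fields: [info.fields.len]std.builtin.Type.StructField = undefined;
    for (&new_fields, info.fields) |*out, in| {
        out.* = .{
            .name = in.name,
            .type = void,
            .default_value = null,
            .is_comptime = false,
            .alignment = 0, // zero-bit field, no alignment
        };
    }
    return @Type(.{ .@"struct" = .{
        .layout = .auto,
        .fields = &new_fields,
        .decls = &.{},
        .is_tuple = info.is_tuple,
    } });
}

// Compile-time no-op: refuses to compile unless `marks` names every field of T.
fn markAllFields(comptime T: type, marks: Used(T)) void {
    _ = marks;
}

const Foo = struct { a: i32, b: []const u8, c: f64 };

test "all fields must be mentioned" {
    // Remove any entry and this becomes a "missing struct field" error.
    markAllFields(Foo, .{ .a = {}, .b = {}, .c = {} });
}
```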
Since it's the standard for deinit to set a struct to undefined after the free, why not just do:
For something that the compiler should ideally be doing, I really hope this convention eventually goes away. (To clarify, I'm not referring to destructors; I mean setting an entire stack frame's space to undefined upon return.) On top of that, it's a huge barrier to const-ness, since setting the struct to undefined requires taking a mutable pointer.
Will also add: the pattern of setting each field to undefined only works if none of the fields have default values, which appears to be the main advantage of this use function.
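To make that limitation concrete (with a hypothetical Foo): once a field has a default, omitting it from a struct literal is no longer a compile error, so the exhaustiveness "notification" silently disappears:

```zig
const std = @import("std");

const Foo = struct {
    a: i32 = 0, // defaulted field
    b: i32,
};

test "a default silently satisfies the exhaustiveness check" {
    // Compiles even though .a is never mentioned; it falls back to 0,
    // which is exactly what defeats the deinit reminder.
    const foo: Foo = .{ .b = 1 };
    try std.testing.expectEqual(@as(i32, 0), foo.a);
}
```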
I think it was @jamii who wanted exhaustiveness checks? Though I did stir a storm in a teacup when, upon my suggestion, Rust started emitting more warnings for unused fields, so I allow myself to weigh in.
As suggested, it honestly gives "the cure is worse than the disease" vibes to me. It is cool that it works, but it reads as too clever a trick, while not actually being all that general. In particular, you'd often want to have dependencies between the fields. For example, if gpa is the allocator, we'd have to write:
use(foo).* = .{
    .gpa = {},
    .b = {
        foo.gpa.free(foo.b);
    },
    .c = {
        foo.gpa.destroy(foo.c);
    },
};
which isn't quite elegant. And it gets worse if gpa itself needs some handling in drop (e.g., reporting memory usage):
use(foo).* = .{
    .gpa = {},
    .b = {
        foo.gpa.free(foo.b);
    },
    .c = {
        foo.gpa.destroy(foo.c);
    },
};
foo.gpa.reportMemoryUsage();
For ergonomics of these use-cases, I'd maybe prefer unpacking syntax:
const gpa, const b, const c = std.meta.asTuple(foo);
This of course suffers from the field-order dependency, but perhaps we can do
const gpa, const b, const c = std.meta.unpackExhaustive(foo, .{.gpa, .b, .c});
?
Can't say I am a fan of either solution, to be honest. It technically gets the job done, but in a maliciously compliant way :-)
I don't have a strong opinion about this function one way or another. On the one hand it seems a bit hacky (@matklad has covered that well), on the other hand, using it would be optional and it could solve problems.
I do have a collection of issues relating to validating Zig source code, which I'm adding this to. Validation appears to me to be an out-of-band domain which might call for a dedicated solution. Validations which can be ensured with comptime magic are a subset of valuable validations, and some of those which can be might be better solved with a different tool. I suspect this is one of those.
Just a note: the destructuring solution isn't applicable to pinned structures, @matklad.
I think it was @jamii who wanted exhaustiveness checks?
I definitely would like them, but almost always in places where I'm not actually destroying the original value, so use wouldn't help me. E.g.:
switch (expr_data) {
    ...
    .union_init => |union_init| {
        const child_dest = switch (dest) {
            .value_at => |ptr| wir.Destination{
                .value_at = c.box(wir.Walue{ .add = .{
                    .walue = ptr,
                    .offset = union_init.repr.tagSizeOf(),
                } }),
            },
            else => dest,
        };
        const arg = try genExpr(c, f, tir_f, child_dest);
        return .{ .@"union" = .{
            .repr = union_init.repr,
            .tag = union_init.tag,
            .value = c.box(arg),
        } };
    },
    ...
}
If I add a new case to the ExprData union, the compiler will warn me that I need to handle it here. But if I add a new field to the UnionInit struct, I'm on my own. In Rust I would exhaustively destructure the struct and get a warning if I missed a field, or if a destructured field went unused without being explicitly discarded.
For the kind of code I often write, exhaustiveness checking catches a lot of bugs and makes changes less scary. Of the features that could realistically be added to Zig at this stage, this is the one I miss the most.
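In the absence of exhaustive destructuring, one blunt workaround is a comptime field-count assertion next to the code that needs updating (a sketch; this UnionInit is a stand-in, not the real type):

```zig
const UnionInit = struct { repr: u32, tag: u8 }; // stand-in for the real struct

// Fails to compile, pointing here, whenever UnionInit gains or loses a field.
comptime {
    if (@typeInfo(UnionInit).@"struct".fields.len != 2)
        @compileError("UnionInit changed; update the .union_init arm accordingly");
}
```

This catches added or removed fields but, unlike Rust's destructuring, says nothing about which field changed or whether each one is actually used.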
I think it was @jamii who wanted exhaustiveness checks?
I definitely would like them, but almost always in places where I'm not actually destroying the original value so use wouldn't help me.
As I read the OP code example, there is no direct link to destruction. All it does is provide a pointer to a struct with void fields for each real field of the argument; we're really just re-using the compiler's .{} coercion struct-field exhaustiveness check.
If you just want a reminder when the fields change, use(x).* = .{ .a = {}, .b = {} }; would be a no-op that'd be allowed anywhere, and as often as you'd like (since it's a no-op).
Dijkstra once said that the best formal specification for inverting a matrix is "invert a matrix". The best solution to this class of problem is a way to say "all fields of foo shall be assigned in this block".
std.meta.use is an aid here, but not a solution. Consider a create function:
pub fn create(allocator: Allocator) !*Stooges {
    const stooges = try allocator.create(Stooges);
    std.meta.use(stooges).* = .{
        .moe = {
            stooges.moe = "nyuk nyuk";
        },
        .larry = {
            stooges.moe = "why I oughta!";
        },
        .curly = {
            stooges.curly = "This is gettin' on my noives!";
        },
    };
    return stooges;
}
If you add Shemp later, and forget about him (everyone forgets about Shemp), this function has you covered. But .larry is still undefined. Easier to spot? Sure, but we're still looking at code here, when we could be validating it.
If you only removed the defaults, without making the fields void, this could be used with minimal changes:
const std = @import("std");
const expect = std.testing.expect;

fn DefaultsRemoved(T: type) type {
    const info = @typeInfo(T).@"struct";
    var new_fields = info.fields[0..info.fields.len].*;
    for (&new_fields) |*field| {
        field.default_value = null;
    }
    return @Type(.{ .@"struct" = .{
        .layout = info.layout,
        .fields = &new_fields,
        .decls = &.{},
        .is_tuple = info.is_tuple,
    } });
}

fn use(x: anytype) *DefaultsRemoved(@TypeOf(x.*)) {
    return @ptrCast(x);
}

const Foo = struct {
    a: i32 = 1,
    b: []const u8 = "",
    c: *f64,

    fn create(gpa: std.mem.Allocator, b: []const u8) !*Foo {
        const foo = try gpa.create(Foo);
        use(foo).* = .{
            .a = 1234,
            .b = try gpa.dupe(u8, b),
            .c = try gpa.create(f64),
        };
        return foo;
    }
};

test "example" {
    const gpa = std.testing.allocator;
    // "Normal" way to initialize:
    // var foo: Foo = .{
    //     .a = 1234,
    //     .b = try gpa.alloc(u8, 100),
    //     .c = try gpa.create(f64),
    // };
    // Validating that all fields are set:
    var foo: Foo = undefined;
    use(&foo).* = .{
        .a = 1234,
        .b = try gpa.alloc(u8, 100),
        .c = try gpa.create(f64),
    };
    defer gpa.free(foo.b);
    defer gpa.destroy(foo.c);
}
This works better when using an init or a create, since to make the resulting struct const you need to wrap the initialization in a block.
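An untested sketch of that block-wrapping, reusing the DefaultsRemoved-based use helper from the snippet above:

```zig
test "const result via a labeled block" {
    const gpa = std.testing.allocator;
    const foo: Foo = blk: {
        var tmp: Foo = undefined;
        // The use() assignment needs a mutable pointer, so the mutation
        // happens on a temporary and the final binding stays const.
        use(&tmp).* = .{
            .a = 1234,
            .b = try gpa.alloc(u8, 100),
            .c = try gpa.create(f64),
        };
        break :blk tmp;
    };
    defer gpa.free(foo.b);
    defer gpa.destroy(foo.c);
    try std.testing.expect(foo.a == 1234);
}
```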
That's a solution to a problem that should not have existed in the first place. If there's ever a danger of forgetting to initialize a field, then don't give it a default init. Most people should not use the default init feature. Zig originally did not have it with no plan to add it. I knew when I added it that people would not understand the nuance of how to use default initialized struct fields, and here we are. There is unfortunately still a use case for default struct field inits which is adding a field while maintaining backwards compatibility in the API, which is why the feature exists. But most people in general should simply never use default struct field inits.
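The backwards-compatibility case mentioned above can be illustrated with a small hypothetical example: adding a defaulted field keeps existing initializers compiling unchanged:

```zig
const std = @import("std");

// v1 of a public options struct:
const OptionsV1 = struct { verbose: bool };

// v2 adds a field; the default keeps old call sites compiling as-is:
const OptionsV2 = struct { verbose: bool, color: bool = true };

test "old initializer still works against the new struct" {
    const o: OptionsV2 = .{ .verbose = true }; // .color falls back to true
    try std.testing.expect(o.verbose and o.color);
}
```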
Oh, right :facepalm:
Thoughts?
cc @matklad this seems relevant to your interests.