schungx closed this issue 3 years ago.
Your link has an extra underscore at the end, which does not work. If you edit the text and remove it, then it points to the comment.
@jhwgh1968 I suggest we start a new conversation here. The original issue is getting way too long and sometimes takes forever to load...
Alright. I will post a new To-Do list comment right after this, so you can link to it.
- … in the `rhai` crate -- #23
- `Dynamic::from`, for functions that return `Result<_, EvalAltResult>` -- #30
- … on an `Engine`, in addition to inside a `Module` -- #32
- `rhai_fn` attribute for functions nested within modules (until this, attributes cannot be applied to them) -- #36
- `rhai_mod` attribute for submodules nested within modules (until this, nested modules do not work) -- #40
- `"+"` and similar operator names (via the `name` attribute)
- `#[cfg]` attributes on modules and submodules (right now it causes errors) -- #42
- … where `pub mod` is exported and `mod` is not
- `get=XXX`, `set=XXX`, `index_get`, `index_set` in `rhai_fn` as alternatives to `name` -- #55
- Remove the `Position` parameter in `call` to reduce function call overhead
- `rhai_generate` to dump into an existing module instead of always creating a new module -- #56?
- ~~`module_init` function to support arbitrary extra behavior in `module::generate()`~~ -- not in 0.19.0
- ~~`impl` blocks with `self` parameters on types~~ -- not in 0.19.0

Here are the current top-level code examples from the crate to give an idea of the syntax:
```rust
use rhai::plugin::*; // <-- required import for macros

#[rhai::export_module(name = "Math::Advanced")]
pub mod advanced_math {
    use rhai::FLOAT;

    // public constants are exported to Rhai by name
    pub const MYSTIC_NUMBER: FLOAT = 42.0 as FLOAT;

    // public functions are exported to Rhai, and can have attributes on them
    #[rhai_fn(name = "euclidean_distance")]
    pub fn distance(x1: FLOAT, y1: FLOAT, x2: FLOAT, y2: FLOAT) -> FLOAT {
        ((y2 - y1).abs().powf(2.0) + (x2 - x1).abs().powf(2.0)).sqrt()
    }

    // public submodules are recursively exported, and can have attributes via
    // the 'rhai_mod' attribute
    pub mod submodule {
        /* inner items here */
    }
}
```
```rust
use rhai::{Engine, EvalAltResult, FLOAT};

fn main() -> Result<(), Box<EvalAltResult>> {
    let mut engine = Engine::new();

    let m = rhai::exported_module!(advanced_math);
    engine.load_package(m);

    assert_eq!(engine.eval::<FLOAT>(
        r#"import "Math::Advanced" as math;
           let m = math::MYSTIC_NUMBER;
           let x = math::euclidean_distance(0.0, 1.0, 0.0, m);
           x"#)?, 41.0);
    Ok(())
}
```
```rust
use rhai::plugin::*; // <-- required import for macros
use rhai::FLOAT;

#[rhai::export_fn]
pub fn distance_function(x1: FLOAT, y1: FLOAT, x2: FLOAT, y2: FLOAT) -> FLOAT {
    ((y2 - y1).abs().powf(2.0) + (x2 - x1).abs().powf(2.0)).sqrt()
}
```
```rust
use rhai::{Engine, EvalAltResult, FLOAT, Module, RegisterFn};
use rhai::module_resolvers::*;

fn main() -> Result<(), Box<EvalAltResult>> {
    let mut engine = Engine::new();
    engine.register_fn("get_mystic_number", || 42 as FLOAT);

    let mut m = Module::new();
    rhai::register_exported_fn!(m, "euclidean_distance", distance_function);

    let mut r = StaticModuleResolver::new();
    r.insert("Math::Advanced".to_string(), m);
    engine.set_module_resolver(Some(r));

    assert_eq!(engine.eval::<FLOAT>(
        r#"import "Math::Advanced" as math;
           let m = get_mystic_number();
           let x = math::euclidean_distance(0.0, 1.0, 0.0, m);
           x"#)?, 41.0);
    Ok(())
}
```
I also updated the code to use `load_package` instead of a static module resolver -- I hope that is correct @schungx?

Yes, `load_package` is the simplest for testing, because it simply converts a `Module` to a `PackageLibrary`.
BTW, I find that errors, especially return type conflicts when using `return_raw`, are very difficult to diagnose. They do not point to any location, and the error messages are quite cryptic. I just spent hours finding a type bug in one of the module functions.

The best way to clean up these errors seems to be to copy the module out, remove `#[export_module]` and all the `#[rhai_fn]` attributes, then compile it as a separate module. That'll pick out all the errors.

It would be great if `#[rhai_fn]` could be put on a normal module that does not have `#[export_module]` and do nothing. Right now it is a syntax error. If I could just comment out the `#[export_module]` line and get lints, it would help coding a great deal.
> BTW, I find that errors, especially return type conflicts when using `return_raw`, are very difficult to diagnose. They do not point to any location, and the error messages are quite cryptic.

This is why I have tried to focus on error handling and diagnostic tests every time I add a new error. You are probably finding the issues I was planning to find with negative testing later.

Post examples of all the ones you find, and I will try to fix them.
```text
error[E0277]: the trait bound `utils::ImmutableString: std::convert::From<u128>` is not satisfied
  |
  = help: the following implementations were found:
            <utils::ImmutableString as std::convert::From<&str>>
            <utils::ImmutableString as std::convert::From<std::boxed::Box<std::string::String>>>
            <utils::ImmutableString as std::convert::From<std::string::String>>
  = note: required because of the requirements on the impl of `std::convert::Into<utils::ImmutableString>` for `u128`
  = note: required because of the requirements on the impl of `std::convert::From<u128>` for `any::Dynamic`
  = note: required because of the requirements on the impl of `std::convert::Into<any::Dynamic>` for `u128`
```
The location of the error points to the first character of `lib.rs`, which is not useful.

This error, or something similar, is generated whenever I make the mistake of using `.into()` to convert a type into `Dynamic` that doesn't support it. For example, `Ok(x.into())` will generate this error if the type of `x` is not `Into<Dynamic>`. `Ok(Dynamic::from(x))` solves this problem.
Ah, I see.
I am currently working on a large refactoring (so no plugin changes please!), but I will have a commit in there that makes the error message clearer.
As @schungx noted:

> OK, I tested the error diagnostics.
>
> - `#[export_fn]` successfully puts out an error for `return_raw` which does not return `Result`.
> - `#[export_fn]` functions have good error diagnostics.
> - `#[export_module]` functions do not highlight `return_raw` errors, nor do they highlight interior errors.

Therefore, it seems like we're still missing error position info for modules.
I think this means I should probably start shaking the trees harder to see what fruit falls. I will start with the `return_raw` reporting, and try some things.
@jhwgh1968 BTW... is it possible for `#[set_exported_fn]` to take a closure instead of an `#[export_fn]` fn?

The code base right now is a bit littered with a number of `#[export_fn]` simple functions just because a function is needed.

Since you're generating a module, I'm not sure if you can splice it in at the registration call site...
BTW, just FYI... I reran the benchmarks and it seems plugins do not have any material impact. Good news!
Still, I would suggest removing the `Position` parameter in `call`. Even though it is only 32 bits, one fewer argument is one fewer argument...

I'll put an item in the TODO...

Also, I'll start on the plugins documentation.
> BTW, just FYI... I reran the benchmarks and it seems plugins do not have any material impact. Good news!

Indeed it is! :tada:

> Still, I would suggest removing the `Position` parameter in `call`. Even though it is only 32 bits, one fewer argument is one fewer argument...

The `Position` parameter was something I missed when I turned parameter checking from a runtime error (which requires it) into a `debug_assert!`. That should be an easy fix.
> BTW... is it possible for `#[set_exported_fn]` to take a closure instead of an `#[export_fn]` fn?

Unfortunately, no. You should think of `set_exported_fn!` as the equivalent of this code:
```rust
// Pseudocode -- `macro_let!` is not real Rust, just illustration.
macro_rules! set_exported_fn {
    ($module:ident, $export_name:literal, $fn_path:path) => {{
        macro_let!($generated_module_path = {
            let fn_name = module_path.segments.pop();
            let rhai_fn_module = format!("rhai_fn_{}", fn_name);
            module_path_segments.join(rhai_fn_module);
        });
        macro_let!($input_types_fn = concat!($generated_module_path, "input_token_types"));
        macro_let!($callable_token_fn = concat!($generated_module_path, "token_callable"));
        $module.set_fn($export_name, FnAccess::Public,
            $input_types_fn().as_ref(),   // array of argument types
            $callable_token_fn());        // the token implementing PluginFunction
    }};
}
```
In other words, it simply navigates to the generated module, and calls the functions defined therein to return the list of arguments and the `PluginFunction` to add.

The only way to get that module generated in its current form requires an `fn`.
> The code base right now is a bit littered with a number of `#[export_fn]` simple functions just because a function is needed.

I am hoping that can be cleaned up once I allow including a function in a module "remotely". Then, it should be possible to generate those functions with macros, not tag them as `#[export_fn]`, and then pull them into the module tagged with `#[export_module]`.
BTW, I just merged in the first draft of the Plugins section in the documentation.
Example of error:

```rust
macro_rules! gen_cmp_functions {
    ($($arg_type:ident),+) => {
        mod test { $(pub mod $arg_type {
            use super::super::*;

            #[export_module]
            pub mod functions {
                #[rhai_fn(name = "<")]
                pub fn lt(x: $arg_type, y: $arg_type) -> bool {
                    x // deliberate type error: returns $arg_type, not bool
                }
            }
        })* }
    };
}

gen_cmp_functions!(i8);
```

```text
mismatched types
expected `bool` because of return type rustc(E0308) [1, 1]
```

The same code without `macro_rules!` will highlight the error just fine.
@jhwgh1968 Actually, I feel that `plugins` is ready to merge into `master`. What do you think?

Or leave it on a branch just in case we need to do bug fixes in master. Merge when we're ready to release 0.19.0.
> The same code without `macro_rules!` will highlight the error just fine.

Hmm... I think that might be the order of how macros are invoked vs procedural macros. I will look into it.
> Actually, I feel that `plugins` is ready to merge into `master`. What do you think?

I'd say merge the current branch to master, but keep the branch going a little longer.

The merge to master (and then to upstream master) allows us to discover integration issues, and lets other people give feedback on what we have so far. I think it is definitely a "beta version" now, so we should do that.

However, I'd still like to keep the `plugins` branch until I have all the features done. The only ones left:
- A `mod_init` function, i.e. a user function which can change the generated module in an `#[export_fn]`
- An `#[export_type]` macro for `impl` blocks

(I'm considering leaving the last one out of the 0.19 release, because it's a bit of a mess right now.)

Everything else on the to-do list is fixes or usability improvements (such as the items for extern imports or `generate` flexibility). Those can be individual PRs, so it doesn't matter as much.
Okay, I am very confused. Checking out the current plugins branch, I get 32 failed doc tests for Rhai if I use the `only_i32` feature. This is both with the nightly and stable compilers, even though the CI for commit 91b4f8a6 is all green!

I'll try opening my PR as-is, and see if CI passes anyway...
> I get 32 failed doc tests for Rhai if I use the `only_i32` feature.

Which tests failed? Can you send me a log?
It seems that `export_prefix = "..."` checks the registered name of the function, instead of the actual name of the function. For example, `#[rhai_fn(name = "foo")] fn bar()...` will match `foo` instead of `bar`.

You should match the actual function name instead because, if I put a `rhai_fn(name = "...")` on a function, most likely I would like to export it! `export_prefix` is only useful when I don't need to rename functions and I want to include only some of them.
> You should match the actual function name instead because, if I put a `rhai_fn(name = "...")` on a function, most likely I would like to export it! `export_prefix` is only useful when I don't need to rename functions and I want to include only some of them.

You are absolutely right. I went back and forth on this for a while, and without thinking, implemented it the wrong way! I will fix it.
> Which tests failed? Can you send me a log?

Attached is the log of failures on 1.46.0 stable (nightly is very similar if not identical): cargo-test.txt

It seems that changing `INT` to `i32` instead of `i64` is causing a lot of issues with casting and method matching.
Actually, I find that it would probably be more useful to have `skip_prefix` and `skip_postfix` versions. I can then write functions like `pub get_XXX_internal` and have `skip_postfix = "_internal"` if I need that function elsewhere. Or `skip_prefix = "_"` to skip all functions beginning with underscores...

`export_prefix` is less useful because it is unlikely that an entire API will begin with the same text prefix...
> Attached is the log of failures on 1.46.0 stable (nightly is very similar if not identical): cargo-test.txt

Now I see it. These are expected. All doc comments use `i64` instead of `INT` for simplicity. They all fail with `only_i32`. Therefore, the CI is run with `--tests --features only_i32` to exclude doc comments.
Ah, I see. Thanks for the clarification.
> Actually, I find that it would probably be more useful to have `skip_prefix` and `skip_postfix` versions.
I specifically chose affirmative over negative (export when the match is true not false) and prefix over suffix with the end-user in mind.
I thought it would be simpler, it would better align with Rust code people already write, and it would work with the way a user would build modules.
I could see an argument for having an `export_suffix`, since even the Rust stdlib has `unchecked_add` and `saturating_add` rather than `add_saturating` and `add_unchecked`. But a `skip_suffix` where the default is to export everything? I don't see that as being the common case. `export_all` is the way to say, "I have a trivial type, export everything", with opt-in for specific skips.
Honestly, I wasn't expecting very much use of `export_prefix` until I implemented `impl` block exports. That's the case where scoping based on `pub` (the current behavior) breaks down. For example:
```rust
pub struct MyRustObject { ... }

#[export_impl(export_prefix = "rhai")]
impl MyRustObject {
    // marked pub due to the crate API, but Rhai shouldn't use it
    pub fn bar(&self) -> &Bar { ... }

    // this is the version for Rhai
    pub fn rhai_convert_to_bar(&self) -> Bar {
        self.bar().clone()
    }
}
```
In my opinion, the only place conditional skipping of names would be useful is the Rhai core library.
To be clear, I'm still looking for ways to use my macros that are less awkward in Rhai core before 0.20. However, I consider it a very special case that should not drive feature development.
Well, in your example, you're gonna need a `rhai_fn(name = "xxx")` attribute anyway to remap the function name:
```rust
pub struct MyRustObject { ... }

#[export_impl(export_prefix = "rhai")]
impl MyRustObject {
    #[rhai_fn(skip)]
    pub fn bar(&self) -> &Bar { ... }

    #[rhai_fn(name = "bar")]
    pub fn rhai_convert_to_bar(&self) -> Bar {
        self.bar().clone()
    }
}
```
Maybe the prefix is always removed from the function name to get to the registered name?
```rust
pub struct MyRustObject { ... }

#[export_impl(export_prefix = "rhai_")]
impl MyRustObject {
    // marked pub due to the crate API, but Rhai shouldn't use it
    pub fn bar(&self) -> &Bar { ... }

    // this is the version for Rhai -> automatically rename to "convert_to_bar"?
    pub fn rhai_convert_to_bar(&self) -> Bar {
        self.bar().clone()
    }
}
```
> Maybe the prefix is always removed from the function name to get to the registered name?

I rather like the explicit naming of things that will be exported, and that seems just a little "too automatic". However, you do have a point that they are probably named sub-optimally for almost any usage without a rename.

I feel like this is one of several questions best answered by implementing `export_impl` and seeing how people use it. That, unfortunately, will be a while (and I don't think it should hold up version 0.20).
I spent some time today getting another feature mostly done: `#[rhai_mod(flatten)]`. Basically, it does what `lib.combine_flatten` does, except it reorganizes and simplifies the generated code, rather than doing the merge at runtime.

While I think it might be a small speed win (in theory), I was actually hoping that it would allow some simplification of the Rhai stdlib. I saw several cases where libraries are built, piece by piece, and then added to the actual package one at a time. Perhaps I could do better by `flatten`-ing them down into a single structure?
After spending the last 45 minutes reviewing most of `src/packages`, I am surprised how little it would help. Too much stuff has Rust 1.0 macros to handle repetition.

To give you an idea of how I imagine end-users of the language using my macros, take a look at `src/packages/time_basic.rs`:
```rust
def_package!(crate:BasicTimePackage:"Basic timing utilities.", lib, {
    // Register date/time functions -- this one line does the whole job
    lib.combine_flatten(exported_module!(time_functions));
});

#[export_module]
mod time_functions {
    #[inline(always)]
    pub fn timestamp() -> Instant {
        Instant::now()
    }

    #[rhai_fn(return_raw)]
    pub fn elapsed(timestamp: &mut Instant) -> Result<Dynamic, Box<EvalAltResult>> {
        /* ... */
    }

    #[rhai_fn(get = "elapsed", return_raw)]
    #[inline(always)]
    pub fn elapsed_prop(timestamp: &mut Instant) -> Result<Dynamic, Box<EvalAltResult>> {
        /* ... */
    }

    #[rhai_fn(return_raw, name = "-")]
    pub fn time_diff(ts1: Instant, ts2: Instant) -> Result<Dynamic, Box<EvalAltResult>> {
        /* ... */
    }

    /* ... more functions, all taking an Instant as their first argument ... */
}
```
There is one module defined, and tagged with `#[export_module]`. It contains the entire API for a new type: Rust's `Instant`. The light sprinkling of my macros keeps things readable, and saves the boilerplate I found difficult. (Tons of `register_fn` and closure definitions that would otherwise be in that `def_package!`, if I understand "the old way" correctly.)
Once I wrote `flatten`, I thought: why not turn all the integer definitions you have -- with their many `#[cfg]` blocks -- into that structure? Unroll the macros, put in type-based submodules for `int_u8`, `int_u16`, etc., and then `flatten` them all to generate the "arithmetic" module all at once!

However, when I actually sat down and studied the code, I could see why you did it that way. Not only is the code virtually identical between all the types, the differences are best grouped by their `#[cfg]` features. It is not the type being defined by the operators (like `Instant`), but the operators applying to types broadly.
I still wonder about some of the `trig_functions` in `src/package/math_basic.rs`, or the debug functions defined in `string_basic.rs`. But if there is a way to reorganize them, it's not obvious. I have mostly given up on ideas of improving what you already have.

As a result, I am undecided whether finishing `flatten` is even worth it. It might be for some libraries, but I'd like to write some end-user code of my own with Rhai before I go to all that trouble. (The tests for it will be huge, and there is a high potential for bugs.)

Instead, I'll probably work on getters and setters next.

Any thoughts on my rambling?
> While I think it might be a small speed win (in theory), I was actually hoping that it would allow some simplification of the Rhai stdlib. I saw several cases where libraries are built, piece by piece, and then added to the actual package one at a time. Perhaps I could do better by `flatten`-ing them down into a single structure?

Well, module construction should not happen frequently, so it is usually not a bottleneck. Auto-flattening is nice, but not critical, because the user can always call `combine_flatten` to recursively add the functions. All you'll be saving is the construction of a few sub-module types, and that is not a problem for something not on the hot path.
I recommend skipping it for now.
Instead, auto getters/setters/indexers will be a huge feature.
One thing you may consider is that sometimes the same function is used to register multiple Rhai functions. For example, the user may have a `+` operator and an `add` function, with `+` delegating to `add`. Also, it is always useful to have functions plus properties on the same field. For example: `foo.bar()` and `foo.bar` both return the same thing.

Right now, I must define two functions, with one calling the other. Some way to automate this? Maybe allowing `rhai_fn(name = "...","...","...")` with multiple names? Or `#[rhai_fn(name = "prop", name = "->", get = "prop")] fn get_prop()...` would put `prop`, `->` and `get$prop` into `name`, which is turned into a `Vec`.
I've put in a PR that supports multiple `name` parameters.
I've added your ID to the `authors` of Rhai since it is a pretty major addition you have made to the project. Would you like `jhwgh1968` or your full name?

PS I've also removed my name from `codegen` since I really did not work on it other than using it.
Since my committer full name is a "nom de plume" rather than a real name (it's a long story), I think `jhwgh1968` would be better.

As to the authorship of the `codegen` crate, I will send you an e-mail about that.
Do you think `0.19.0` is ready to launch?
> Do you think `0.19.0` is ready to launch?

I would like to go down the checklist one more time, and perhaps discuss one or two of the items. But unless one of those turns into a blocker in your mind, I think it's ready!
> As to the authorship of the `codegen` crate, I will send you an e-mail about that.

Just so it's on the record, our discussion via e-mail ended up here: I'll stay primary author of the `codegen` crate, and the pointer to this repo will still point people where to file issues.
> Just so it's on the record, our discussion via e-mail ended up here: I'll stay primary author of the `codegen` crate, and the pointer to this repo will still point people where to file issues.

OK. I take it that you mean the `repository` field in `codegen/Cargo.toml` should point to the parent repo (as it is right now), correct?
> OK. I take it that you mean the `repository` field in `codegen/Cargo.toml` should point to the parent repo (as it is right now), correct?

Correct.
Sorry, accidentally merged the latest PR. But it looks OK, except for a feature test that I'll fix.
I think I might have found what went wrong. I did merges on my `plugins_dev` branch. I guess it is probably a bad idea to do a rebase after merging... It probably replays everything on those merges again...

So if I just delete the branch after a merge, and then reopen a new one... as long as I haven't merged that one, I can freely rebase, I suppose.
Your statement is correct. The workflow is:

1. … `plugins_dev` to the `plugins` tip.
2. … dev branch.
3. … dev branch.
4. Pull `plugins_dev` to `plugins`, because they should now be equal except for the last merge commit you did.

That is the workflow I use. The only difference is, I create "feature branches" for one thing at a time, and delete them once they merge.
As noted in #58, some stuff did get reverted. I will fix it.
On second thought, I think you're right. It ended up merged okay.
I'll add a second PR for a rustfmt CI job, though.
> Pull `plugins_dev` to `plugins`, because they should now be equal except for the last merge commit you did.

If I do a `rebase` afterwards, will it screw up the root of the branch? Since it'll be moving the branch to the current head, which is after the previous merge...
It shouldn't.

The first step of the rebase command is to figure out where to put it. To see the commit it will decide on, you can run this yourself: `git merge-base plugins plugins_dev`.

If the commit it shows is the tip of the plugins branch, then `rebase` will do the right thing. If the commit is below the tip, somewhere in the middle, then you will get into trouble.

If you do a rebase onto a branch which is ahead, it will point to the head of your current branch. That means there is nothing above it to rebase, and the branch will just fast-forward.
See if this longer explanation helps.
> If the commit it shows is the tip of the plugins branch, then `rebase` will do the right thing. If the commit is below the tip, somewhere in the middle, then you will get into trouble.
Ah! Gotcha!
BTW, there is a `diag_test` directory. Is it used?

I suppose we are just about ready to put out `0.19.0`...
> BTW, there is a `diag_test` directory. Is it used?

That was an experiment at a previous point in development, and was committed by mistake. Please do remove it.
> I suppose we are just about ready to put out `0.19.0`...

Very close. I will double-check my list when I get time this week.
On the current list, I checked off a couple more things to bring it up to date. At the very least, I think we can merge the `plugins` branch to `master` now.

Could you please double-check, and make sure you agree all the items are done? In particular, that your PR handled the `module_generate` question, and that the current state of get/set/index is good enough to ship.

The two items left are:

1. A `module_init` function. I put this in because I suspected it could solve some of your special cases. However, it seems the Rhai standard library is pretty good without it, so I would drop this unless you see value.
2. Rhai documentation. I have seen a lot of changes from you go by, and I would like to do one quick review of them before I check that box. Is the current "nightly" book updated?

I will double-check to make sure I haven't missed anything later this week.
> Is the current "nightly" book updated?

Not really... I manually update the `vnext` book once in a while. It might be easier to run `mdbook serve` on the latest sources to check the current version...
@jhwgh1968 I think the nightly compiler has some new changes. Types wrapped up in repeat blocks in macros now reside inside `Group(TypeGroup { ... })` instead of being a simple `Path`. This breaks detection of `&mut` parameter types.

I've put in a simple `flatten_groups` function to bypass all this grouping in `rhai_module.rs`.
I'm glad you've got a fix for it.

Meanwhile, after a long thread on the Rust Forums, apparently the lint I suppressed was not saying what I thought it was. I have removed an extra `clone()` from the generated code, and will have a PR in a moment.
Tracking Issue