Closed: martpie closed this issue 3 weeks ago
I haven't spent as much time on the sqlite side (I mostly do server with postgres), so there are doubtless some DX things that could be improved.
I believe it's expected that you have to create the file yourself. I'd welcome a small PR to update the docs regarding that.
The solution I've seen for migrations is to create the migrations statically, and then use `include_dir` or `rust_embed` to include those assets in your binary. Then you can run those migrations during program startup. The effect is having SQL that defines table creation and indices, but you shouldn't have to write those files yourself (although ormlite doesn't generate indices automatically yet).
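As a rough sketch of that approach, assuming a `migrations/` directory of plain `.sql` files at the crate root (the `;` splitting below is naive, but fine for simple `CREATE TABLE` / `CREATE INDEX` files):

```rust
use include_dir::{include_dir, Dir};
use sqlx::SqliteConnection;

// Embed the migrations/ directory into the binary at compile time.
static MIGRATIONS: Dir<'static> = include_dir!("$CARGO_MANIFEST_DIR/migrations");

async fn run_migrations(conn: &mut SqliteConnection) -> Result<(), sqlx::Error> {
    // Apply the files in name order, e.g. 0001_init.sql, 0002_indices.sql.
    let mut files: Vec<_> = MIGRATIONS.files().collect();
    files.sort_by_key(|f| f.path().to_path_buf());

    for file in files {
        let Some(sql) = file.contents_utf8() else { continue };
        // Naive statement splitting; good enough for simple schema files.
        for statement in sql.split(';').map(str::trim).filter(|s| !s.is_empty()) {
            sqlx::query(statement).execute(&mut *conn).await?;
        }
    }
    Ok(())
}
```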
Let me know if that helps!
> Then you can run those migrations during program startup.

Can you share how? Do you have a code pointer?

> The effect is having SQL that defines table creation and indices, but you shouldn't have to write those files yourself

What do you mean? Is there a way to get the `CREATE TABLE` statement just from my structs that derive `Model`?
Yeah, if you don't want to use the ormlite CLI, you can check out the code: there's a method, something like `autogenerate`, in the ormlite CLI that shows how to generate the `CREATE TABLE` statements. Basically, you take the schema generated by analyzing the Rust structs, and then generate a migration from an empty schema to the schema represented by the structs.
Running migrations during startup is pretty trivial. They're plain SQL statements, so you'd just do something like:
```rust
for statement in migration.statements {
    query(statement).execute(&mut conn).await?;
}
```
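And if you want to skip migration tooling entirely for an embedded app, the same idea works with the statements kept as string constants, copied from the CLI output or written by hand. A rough sketch (the schema here is made up):

```rust
use sqlx::{Connection, SqliteConnection};

// Made-up schema: in practice these statements would mirror your Model structs
// (e.g. taken from the ormlite CLI output). IF NOT EXISTS keeps startup idempotent.
const SCHEMA: &[&str] = &[
    "CREATE TABLE IF NOT EXISTS track (
        id INTEGER PRIMARY KEY,
        path TEXT NOT NULL,
        title TEXT NOT NULL
    )",
    "CREATE INDEX IF NOT EXISTS idx_track_path ON track (path)",
];

async fn init_db(url: &str) -> Result<SqliteConnection, sqlx::Error> {
    let mut conn = SqliteConnection::connect(url).await?;
    for statement in SCHEMA {
        sqlx::query(statement).execute(&mut conn).await?;
    }
    Ok(conn)
}
```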
I have just finished migrating my project to SQLite using ormlite. So far it's been quite a smooth experience (integrating with JSON was surprisingly smooth, great job!), but there are a couple of gotchas:
- `where_in` would be absolutely awesome to match a list of track IDs (`WHERE ... IN (..., ..., ...)`); `push_bind` / `bind_all` could maybe improve the situation here. My solution (a sqlx `QueryBuilder` alternative is sketched after this list):

  ```rust
  // One "?" placeholder per ID, bound one by one.
  let placeholders = track_ids.iter().map(|_| "?").collect::<Vec<_>>().join(", ");
  let where_statement = format!("id IN ({})", placeholders);

  let mut query_builder = Track::select().dangerous_where(&where_statement);
  for id in track_ids {
      query_builder = query_builder.bind(id);
  }

  let tracks: Vec<Track> = query_builder.fetch_all(&mut self.connection).await?;
  ```
- `#[ormlite( ... )]` support for `PathBuf` to serialize/deserialize paths would be awesome
- `unique` would be quite useful too
- an `_id` key seems not supported, even when specifying a `primary_key` on this field ("primary key not found" error or something...)
- `geekorm` in that regard is very nice: `MyStruct::create_table(&connection)`
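For reference, here is roughly what the `push_bind` route could look like with sqlx's `QueryBuilder` (untested sketch; it assumes integer IDs, a `track` table, and that `Track` implements `FromRow` via the `Model` derive):

```rust
use sqlx::{QueryBuilder, Sqlite, SqliteConnection};

// Hypothetical helper: fetch all tracks whose id is in `track_ids`.
async fn tracks_by_ids(
    conn: &mut SqliteConnection,
    track_ids: &[i64],
) -> Result<Vec<Track>, sqlx::Error> {
    // Builds `SELECT * FROM track WHERE id IN (?, ?, ...)` with one bind per ID.
    let mut builder: QueryBuilder<Sqlite> = QueryBuilder::new("SELECT * FROM track WHERE id IN (");
    let mut separated = builder.separated(", ");
    for id in track_ids {
        separated.push_bind(*id);
    }
    separated.push_unseparated(")");

    builder.build_query_as::<Track>().fetch_all(conn).await
}
```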
(the list above is not me complaining, I'm just sharing candid feedback ;) )
You should have a look at geekorm for inspiration (you both solve the same niche); they have a lot of good ideas, like `select_by_{field}` helpers (via a feature) and `MyStruct::create_table()`. Unfortunately, thread-safe `rusqlite` is just not doable.
I'm sticking with ormlite, really happy with it :) Maybe I can fix some of those things myself if I find the time, but maybe you'll identify some low-hanging fruit :D
For people who want to read code, see this PR for an integration with Tauri (thread-safe support and everything): https://github.com/martpie/museeks/pull/795/files (just look at the rust files only)
I'll close this issue, because it's a weird mix of many small and unrelated issues.
Hi again 😄

Two challenges I am facing when trying to use ormlite for an embedded database:

1. **SQLite file does not get automatically created.** Maybe I missed something, or maybe it's an issue with sqlx, but I had to create the database file myself first; otherwise `SqliteConnection::connect` would fail to open the database. If this is expected, maybe adding a bit about it in the docs could help; otherwise, do you know what I could be doing wrong? (A possible `create_if_missing` workaround is sketched after this list.)
2. **Running the table migrations manually.** Migrations require using the CLI. For an embedded app, it's not OK to ask users to run a CLI. Is it possible to run the table creation (and possibly index setup) given some `Model` structs? If not, am I better off creating those tables and indices manually myself using raw SQL (not sure how to do that), or am I better off using another sqlx-like migration system?
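For the first challenge, one workaround is sqlx's `create_if_missing` connect option. A minimal sketch (the helper and path handling are made up):

```rust
use sqlx::sqlite::SqliteConnectOptions;
use sqlx::{Connection, SqliteConnection};

// Hypothetical helper: open (and create, if needed) the database file at `path`.
async fn open_db(path: &str) -> Result<SqliteConnection, sqlx::Error> {
    let options = SqliteConnectOptions::new()
        .filename(path)
        // Create the .db file on first run instead of failing to open it.
        // Note: the parent directory still has to exist.
        .create_if_missing(true);

    SqliteConnection::connect_with(&options).await
}
```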