blackbeam / rust-mysql-simple

MySQL client library implemented in Rust.
Apache License 2.0

How to implement nested queries? (exec_drop() inside loop over query_iter() result set) #365

Open

claudiofelber commented 7 months ago

I have a use case where I need to iterate over the result of a query and use the row data to update the row. Here is a small example program illustrating the scenario:

use mysql::prelude::Queryable;
use mysql::{from_row, Conn};

fn main() {
    let url = "mysql://root@localhost:3306/vitrosearch_fmimport";
    let mut conn = Conn::new(url).unwrap();

    // Create a table and fill in some data
    conn.exec_drop("create temporary table test (id int not null, text varchar(10) null)", ()).unwrap();
    for n in 1..101 {
        conn.exec_drop("insert into test (id) values (?)", (n,)).unwrap();
    }

    // Select the data in the table and update each row accordingly
    let mut result = conn.query_iter("select id from test").unwrap();
    while let Some(row) = result.next() {
        let r: (i32,) = from_row(row.unwrap());
        let text = format!("{}", r.0);
        conn.exec_drop("update test set text = ? where id = ?", (text, r.0)).unwrap();
    }
}

This example does not compile because the connection would be mutably borrowed twice: once by the result iterator of the select statement, and a second time when executing the update statement:

error[E0499]: cannot borrow `conn` as mutable more than once at a time
  --> src\main.rs:17:9
   |
13 |     let mut result = conn.query_iter("select id from test").unwrap();
   |                      ---- first mutable borrow occurs here
14 |     while let Some(row) = result.next() {
   |                           ------ first borrow later used here
...
17 |         conn.exec_drop("update test set text = ? where id = ?", (text, r.0)).unwrap();
   |         ^^^^ second mutable borrow occurs here

Is there a way to implement this use case with the mysql crate? Reading all the data into memory first and then executing the update statements afterwards is not an option, because the real-world use case involves not 100 records but millions.
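For reference, one common workaround for this borrow conflict (a sketch only, not necessarily one of the options the maintainer has in mind) is to open a second connection: one connection streams the result set while the other runs the updates, so neither is borrowed twice. Note that a MySQL temporary table is visible only to the session that created it, so this sketch assumes `test` is a regular table; the URL is a placeholder.

```rust
use mysql::prelude::Queryable;
use mysql::{from_row, Conn};

fn main() {
    // Placeholder URL; adjust to your server.
    let url = "mysql://root@localhost:3306/test";

    // Two independent connections: `read_conn` streams the rows,
    // `write_conn` executes the updates.
    let mut read_conn = Conn::new(url).unwrap();
    let mut write_conn = Conn::new(url).unwrap();

    let mut result = read_conn.query_iter("select id from test").unwrap();
    while let Some(row) = result.next() {
        let (id,): (i32,) = from_row(row.unwrap());
        let text = format!("{}", id);
        write_conn
            .exec_drop("update test set text = ? where id = ?", (text, id))
            .unwrap();
    }
}
```

For this particular transformation a single server-side statement such as `UPDATE test SET text = CAST(id AS CHAR)` would also avoid the client-side loop entirely, though that only works when the new value can be computed in SQL.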

blackbeam commented 7 months ago

Hi. This is impossible due to a MySql protocol limitation. Your options are: