Closed: bbigras closed this issue 2 years ago.
It is probably because I need to update odbc-api to the latest version for this. Give me a little time and I should complete that.
Please let me know if you still have the issue. If you do, we can reopen this issue to see what else it could be.
I still have it with 20f3c54e9d426a11f4302b6448802eaac763fed6.
Maybe the problem is with odbc_api; I'll try using it directly.
thread 'r2d2-worker-0' panicked at 'called `Result::unwrap()` on an `Err` value: FailedSettingConnectionPooling', /home/bbigras/.cargo/git/checkouts/r2d2_odbc_api-351ead4e98789d8e/20f3c54/src/lib.rs:35:81
stack backtrace:
0: rust_begin_unwind
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/std/src/panicking.rs:577:5
1: core::panicking::panic_fmt
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/core/src/panicking.rs:110:14
2: core::result::unwrap_failed
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/core/src/result.rs:1690:5
3: core::result::Result<T,E>::unwrap
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/core/src/result.rs:1018:23
4: <r2d2_odbc_api::ENV as core::ops::deref::Deref>::deref::__static_ref_initialize
at /home/bbigras/.cargo/git/checkouts/r2d2_odbc_api-351ead4e98789d8e/20f3c54/src/lib.rs:35:9
5: core::ops::function::FnOnce::call_once
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/core/src/ops/function.rs:227:5
6: lazy_static::lazy::Lazy<T>::get::{{closure}}
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/inline_lazy.rs:31:29
7: std::sync::once::Once::call_once::{{closure}}
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/std/src/sync/once.rs:269:41
8: std::sync::once::Once::call_inner
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/std/src/sync/once.rs:426:21
9: std::sync::once::Once::call_once
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/std/src/sync/once.rs:269:9
10: lazy_static::lazy::Lazy<T>::get
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/inline_lazy.rs:30:9
11: <r2d2_odbc_api::ENV as core::ops::deref::Deref>::deref::__stability
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/lib.rs:142:21
12: <r2d2_odbc_api::ENV as core::ops::deref::Deref>::deref
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/lib.rs:144:17
13: <r2d2_odbc_api::ODBCConnectionManager as r2d2::ManageConnection>::connect
at /home/bbigras/.cargo/git/checkouts/r2d2_odbc_api-351ead4e98789d8e/20f3c54/src/lib.rs:106:20
14: r2d2::add_connection::inner::{{closure}}
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/r2d2-0.8.9/src/lib.rs:241:24
15: scheduled_thread_pool::thunk::Thunk<(),R>::new::{{closure}}
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/scheduled-thread-pool-0.2.5/src/thunk.rs:20:35
16: <F as scheduled_thread_pool::thunk::Invoke<A,R>>::invoke
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/scheduled-thread-pool-0.2.5/src/thunk.rs:50:9
17: scheduled_thread_pool::thunk::Thunk<A,R>::invoke
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/scheduled-thread-pool-0.2.5/src/thunk.rs:35:9
18: scheduled_thread_pool::Worker::run_job
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/scheduled-thread-pool-0.2.5/src/lib.rs:364:33
19: scheduled_thread_pool::Worker::run::{{closure}}
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/scheduled-thread-pool-0.2.5/src/lib.rs:326:61
20: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/core/src/panic/unwind_safe.rs:271:9
21: std::panicking::try::do_call
at /rustc/9ad5d82f822b3cb67637f11be2e65c5662b66ec0/library/std/src/panicking.rs:485:40
22: __rust_try
Ok, also: could you give me the connection string you're using? You can replace any passwords or actual usernames with "default"; I'd like to see what it looks like. Also, what OS are you trying this on?
@bbigras if you're on Linux or OS X you might need to install http://www.unixodbc.org/
My program was working with r2d2_odbc but I'm actually using libiodbc (the driver requires iodbc).
DRIVER=wd230hfo64.so;Server Name=my_ip;Server Port=4900;Database=my_database;UID=user;PWD=default
I opened an issue on odbc-api to ask if iodbc is supported.
@pacman82 Any idea on this? I think it is possibly https://crates.io/crates/odbc-sys and not odbc-api that is the issue here, since odbc-api is just passing Environment::set_connection_pooling(AttrConnectionPooling::DriverAware).unwrap(); to odbc-sys, which is failing in some way and causing a cascade effect.
@bbigras Could you try the other odbc driver to see if it works with your stuff?
Do you mean https://github.com/Koka/odbc-rs or try unixodbc just in case it works?
odbc-api uses odbc-sys, not odbc-rs, which is newer and not complete enough at the moment. unixODBC is another driver manager, which I believe odbc-sys uses on Linux and Mac. So give unixODBC a try and see if it will work with your stuff.
@bbigras sudo apt-get install unixodbc-dev
I'll try unixodbc. Note that I don't have the source of wd230hfo64.so, only the .so file, and if I run ldd on it, I see that it requires libiodbcinst.so.2.
with unixodbc and odbc-api directly, I got:
Jan 20 12:18:49.110 DEBUG odbc_api::environment: ODBC Environment created.
Jan 20 12:18:49.123 WARN odbc_api::handles::logging: State: 0000, Native error: 0, Message: I
Error: ODBC emitted an error calling 'SQLDriverConnectW':
State: 0000, Native error: 0, Message: I
odbc-api with iodbc I get:
Jan 20 12:22:29.631 DEBUG odbc_api::environment: ODBC Environment created.
Jan 20 12:22:29.632 WARN odbc_api::handles::logging: State: HY0, Native error: 50, Message:
Error: ODBC emitted an error calling 'SQLSetEnvAttr':
State: HY0, Native error: 50, Message:
It's trying to read odbc.ini, which I don't think I have. But maybe it doesn't matter.
I'm using:
let environment = odbc_api::Environment::new()?;
let mut conn = environment.connect_with_connection_string(&conn_str)?;
Yeah, both error out. I think the older ODBC crate is what you want at the moment until they can fix this issue, so r2d2-odbc. You can try odbc-rs; if it works I can make an r2d2 translation layer for it.
@bbigras looks like @pacman82 commented to your request https://github.com/pacman82/odbc-api/issues/148#issuecomment-1017788413
Thanks!
Btw, odbc-rs is abandoned. That's why I wanted to migrate to odbc-api. I can continue using odbc-rs for now since there's no rush, but eventually I'll need to replace it.
Hi, here is my understanding of what is happening:
iodbc fails when activating the ODBC internal connection pooling, most likely with one of the status codes described here: https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlsetenvattr-function?view=sql-server-ver15
Note that these status codes have 5 letters, whereas we see here only 3. This is most likely due to iodbc not implementing the wide ODBC function calls correctly (the functions ending in W). Not a surprise, as these are dominant on Windows systems and iodbc is to my knowledge only running on OS X. This is also a major difference between odbc-api and odbc-rs: odbc-api currently uses the wide function calls in order to avoid all kinds of encoding errors on Windows systems, where UTF-8 still is not the default. unixODBC, the only other tested driver manager so far, has handled these fine.
However, the original environment error is independent of that. Anybody's guess which error it is supposed to be, but my (hypothetical) money would be on HY024, invalid attribute value. Driver-aware connection pooling is most likely simply not implemented in iodbc. odbc-rs never exposed this functionality at all, so my suspicion is that r2d2-odbc just cached the connections itself without relying on odbc. Going for manual caching in case the driver manager reports an error in r2d2-odbc-api would be a way around that.
After that works we may (who knows) hit issues with the poor widestring support of iodbc, but I could probably help with that within the odbc-api crate. As soon as UTF-8 is the default on all Windows systems I wanted to switch to narrow functions anyway.
unixODBC should work fine. Note that it fails later, after the connection pool and the environment are initialized, when creating the connection. Normally the driver would provide us with a good error explaining why the connection failed, but it is unable to do so here. Most likely the driver .so could not be loaded by unixODBC. This is an installation issue, and most likely some misconfiguration; anything goes, from a 32-bit/64-bit mismatch to it just not finding the right .so file. Yet if there is a proper ODBC driver tested with unixODBC for your data source, I am very confident that it will work with odbc-api.
Cheers, Markus
@bbigras If support is a concern, maybe consider migrating away from iodbc, since it has stopped being supported. This is of course only an option if you have an ODBC driver for your data source which works with one of the supported driver managers.
iodbc is to my knowledge only running on OS-X.
I'm using iodbc on Linux in prod right now, with the abandoned odbc-rs crate and r2d2-odbc.
@bbigras If support is a concern, maybe consider migrating away from iodbc since it has stopped being supported. This is of course only an option if you have an ODBC driver for your data source which works with one of the supported driver managers.
Yeah, I can't. The driver only supports iodbc.
If you need to use iodbc, and iodbc does not implement connection pooling (at least not driver-aware connection pooling), we could implement the pooling within this crate without relying on iodbc. Not doing connection pooling at all is of course not the point of this crate, but depending on the use case it might be a way out.
Thank you @pacman82, but yeah, originally we needed a way to properly Send/Sync for r2d2 to work, hence why we default to the pooling here to make sure it works correctly on things like Rocket. But apparently even without all the default pooling stuff he could not get it to work correctly. I think the issue is his ODBC is too far outdated or incorrectly set up, and he should probably plan on finding an alternative. @bbigras what is this driver for exactly? What database or whatnot is this?
It's for HFSQL, a crappy SQL server used by WinDev applications. I only have a French URL https://pcsoft.fr/st/telec/modules-communs-23/wx23_42u.htm for the driver.
Is there a possibility that, whatever you're using HFSQL for, you could replace it with Postgres or another SQL database?
I will leave this issue open as a reminder, so that if @pacman82 makes any major breaking code changes, or a fix for this that needs to be implemented here, I can add it in later.
Nope. It's for software we use at work. We don't have the sources. I wish they would switch it to PostgreSQL, or even MSSQL.
There is a bigger issue with getting odbc-api to work with iodbc at all. See: https://github.com/pacman82/odbc-api/issues/148
Once that is done, whether @bbigras implements connection pooling directly in his application, or you do it for him here in this crate, is up to your discretion. I would however always try to use the pooling provided by the driver manager, since it is the only place where it can be done optimally with driver support.
Yeah, that can be an issue, as it means I'd need to make it Sync/Send capable.
Connections can't really be Sync without wrapping them in a Mutex. At the very least, diagnostic error messages might end up on the wrong thread. I also remember that the odbc-rs test suite had to be executed sequentially in order to avoid race conditions. Send however is actually in line with the ODBC standard, but not all drivers agree yet.
Yeah, r2d2 currently works with promote_to_send since you made that change last to this library, so it is probably OK. The thing I am wondering is: do I really need to make it driver-aware if r2d2 is just having it create a new connection each time it is called? Or does the back-end ODBC driver generate a pool automatically for these connections being made? @pacman82
If connection pooling is enabled, the driver manager creates a pool for us. It is also driver-aware, which means it uses the driver to decide whether it is cheaper to reuse an existing connection from the pool or to create a new one. This is only relevant if the connections are configured differently (i.e. different connection strings).
This issue is no longer blocked upstream. odbc-api can now create connections to iodbc data sources without workarounds, though compiler flags must be set. In terms of this crate:
AFAIK the iodbc driver manager does not support connection pooling.
I leave it up to the maintainers whether they want to support iodbc by manually holding on to connections. However, it would be a pity to opt out of the quite sophisticated solutions implemented by the other driver managers.
@pacman82 Thank you for the quick return on this. However, since this is now supported: can it work with your pooling, or will it need to be implemented by myself for all of them? I would prefer whatever type of pooling support you could grant me, as otherwise it won't work well with r2d2 without pooling of some sort.
So the thing is, I didn't implement any pooling at all. Microsoft did. Quite sophisticated, really. After that, unixODBC also implemented that part of the standard. As far as I know, iodbc does not implement connection pooling. My quick search with DuckDuckGo at least didn't find anything to the contrary.
At the very least it does not support driver-aware connection pooling. We know that because of the first error we saw in this thread.
All odbc-api does is enable the user to flip the switch and tell their installed driver managers that they may pool connections.
IMHO: You could try catching the error around enabling connection pooling, to at least provide a better error to the user.
Alternatively you could provide your own implementation. Implementing your own connection pooling is tough. For example, you have to take care of removing dead connections from the pool, especially for applications which fire queries infrequently.
I would advise against implementing your own pooling, and just not support iodbc. You can however decide to degrade gracefully: if the call to enable connection pooling fails, log an error and just let your implementation create a new connection every time. In this case it at least works, albeit without the performance benefits.
The connection pool change to one per driver makes sense now that I look at it.
Unlikely, but you can ask @bbigras if it works for him now
https://github.com/genusistimelord/r2d2_odbc_api/pull/3 seems to help.
Without it, I get:
thread 'r2d2-worker-0' panicked at 'called `Result::unwrap()` on an `Err` value: FailedSettingConnectionPooling', /home/bbigras/src/r2d2_odbc_api/src/lib.rs:35:81
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'thread 'r2d2-worker-1' panicked at 'Once instance has previously been poisoned', /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/inline_lazy.rs:30:16
with it the pool is created.
I get a free(): invalid pointer error now, but I'm not sure where it's coming from yet.
Ok, so it fixed part of the problem, but the other problem doesn't seem like it is on my end. @pacman82 do you think it might be creating it but not actually creating the memory portion? Or double-freeing it?
Without a stack trace, it's all just guessing. I've never seen ODBC native connection pooling used with iodbc before. Considering that, I would try pooling the connections on the application level first and see if the error persists.
Thank you for your time @pacman82. Yeah, sometime when I get more time I'll need to test this, though it mostly sounds like drop is being called twice, so maybe a simple println here or there might show if it is indeed freeing multiple times.
@bbigras Could you please give the new tree pooling a try?
https://github.com/AscendingCreations/r2d2_odbc_api/tree/pooling
I added a self-maintained connection pool which should hold onto the connections for the ODBC. I think the free issue might have come from a connection getting unloaded prematurely due to the ODBC not actually doing pooling correctly.
@pacman82 I have set it so the ODBC pooling is off. This seems to work quite well, as I changed the test to try it a few times. The connection pool holding onto the connection might solve part of an issue r2d2 had: it might have unloaded the ODBC connection after a run while keeping the pool entry for it, whose handle was probably already unloaded.
@genusistimelord with the pooling branch, I'm able to create the pool but I get free(): invalid pointer when I call pool.get():
#0 0x00007ff4c2bf8c1f in __pthread_kill_implementation () from /nix/store/bvy2z17rzlvkx2sj7fy99ajm853yv898-glibc-2.34-210/lib/libc.so.6
#1 0x00007ff4c2bae042 in raise () from /nix/store/bvy2z17rzlvkx2sj7fy99ajm853yv898-glibc-2.34-210/lib/libc.so.6
#2 0x00007ff4c2b9949c in abort () from /nix/store/bvy2z17rzlvkx2sj7fy99ajm853yv898-glibc-2.34-210/lib/libc.so.6
#3 0x00007ff4c2bed3f8 in __libc_message () from /nix/store/bvy2z17rzlvkx2sj7fy99ajm853yv898-glibc-2.34-210/lib/libc.so.6
#4 0x00007ff4c2c0229a in malloc_printerr () from /nix/store/bvy2z17rzlvkx2sj7fy99ajm853yv898-glibc-2.34-210/lib/libc.so.6
#5 0x00007ff4c2c03a5c in _int_free () from /nix/store/bvy2z17rzlvkx2sj7fy99ajm853yv898-glibc-2.34-210/lib/libc.so.6
#6 0x00007ff4c2c06491 in free () from /nix/store/bvy2z17rzlvkx2sj7fy99ajm853yv898-glibc-2.34-210/lib/libc.so.6
#7 0x00007ff4c2e8155a in SQLGetDiagRec_Internal () from /nix/store/0sdpvhzax1vbdl60n6r7k91hxfnz1lpp-libiodbc-3.52.15/lib/libiodbc.so.2
#8 0x00007ff4c2e81f1e in SQLGetDiagRec () from /nix/store/0sdpvhzax1vbdl60n6r7k91hxfnz1lpp-libiodbc-3.52.15/lib/libiodbc.so.2
#9 0x000055a7fdfc71b3 in odbc_api::handles::diagnostics::{impl#1}::diagnostic_record<odbc_api::handles::statement::StatementImpl> (self=0x7ff4c1af9a80, rec_number=1, message_text=...)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.40.2/src/handles/diagnostics.rs:184
#10 0x000055a7fdfc702b in odbc_api::handles::diagnostics::Diagnostics::diagnostic_record_vec<odbc_api::handles::statement::StatementImpl> (self=0x7ff4c1af9a80, rec_number=1, message_text=0x7ff4c1af98f0)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.40.2/src/handles/diagnostics.rs:136
#11 0x000055a7fdfcaabc in odbc_api::handles::diagnostics::Record::fill_from<odbc_api::handles::statement::StatementImpl> (self=0x7ff4c1af98f0, handle=0x7ff4c1af9a80, record_number=1)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.40.2/src/handles/diagnostics.rs:233
#12 0x000055a7fdf80c4e in odbc_api::handles::sql_result::SqlResult<bool>::into_result_with_trunaction_check<bool, odbc_api::handles::statement::StatementImpl> (self=..., handle=0x7ff4c1af9a80,
error_for_truncation=false) at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.40.2/src/error.rs:194
#13 0x000055a7fdf80aed in odbc_api::handles::sql_result::SqlResult<bool>::into_result<bool, odbc_api::handles::statement::StatementImpl> (self=..., handle=0x7ff4c1af9a80)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.40.2/src/error.rs:165
#14 0x000055a7fdf7fbb2 in odbc_api::execute::execute<odbc_api::handles::statement::StatementImpl> (statement=..., query=...)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.40.2/src/execute.rs:63
#15 0x000055a7fdf7f84c in odbc_api::execute::execute_with_parameters<odbc_api::handles::statement::StatementImpl, odbc_api::connection::{impl#1}::execute::{closure_env#0}<()>, ()> (lazy_statement=..., query=...,
params=()) at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.40.2/src/execute.rs:45
#16 0x000055a7fdf73832 in odbc_api::connection::Connection::execute<()> (self=0x7ff4c1afaaa8, query=..., params=())
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/odbc-api-0.40.2/src/connection.rs:116
#17 0x000055a7fdf73aa6 in r2d2_odbc_api::{impl#1}::is_valid (self=0x55a7ffb2ba38, conn=0x7ff4c1afaaa0) at /home/bbigras/.cargo/git/checkouts/r2d2_odbc_api-79f7627f8c8ba443/fce3ac2/src/lib.rs:112
#18 0x000055a7fdc7e397 in r2d2::Pool<r2d2_odbc_api::ODBCConnectionManager>::try_get_inner<r2d2_odbc_api::ODBCConnectionManager> (self=0x55a7ffb3dd50, internals=...)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/r2d2-0.8.9/src/lib.rs:466
#19 0x000055a7fdc7dce3 in r2d2::Pool<r2d2_odbc_api::ODBCConnectionManager>::get_timeout<r2d2_odbc_api::ODBCConnectionManager> (self=0x55a7ffb3dd50, timeout=...)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/r2d2-0.8.9/src/lib.rs:424
#20 0x000055a7fdc7ea63 in r2d2::Pool<r2d2_odbc_api::ODBCConnectionManager>::get<r2d2_odbc_api::ODBCConnectionManager> (self=0x55a7ffb3dd50)
at /home/bbigras/.cargo/registry/src/github.com-1ecc6299db9ec823/r2d2-0.8.9/src/lib.rs:411
[...]
odbc-api 0.40.2 features = [ "iodbc" ]
r2d2_odbc_api fce3ac233eb5cc173c57be0f8ea2527a2823e325
@bbigras Can you show me the code you are using that causes this error? Curious on that front. Also, I will look deeper at what might be the issue.
@bbigras I think the issue might be due to this.
https://superuser.com/questions/1404151/invalid-pointer-with-odbc-on-centos-7
This is an error that generally happens when the database can't be connected to from the ODBC manager, when the ODBC driver is wrong or incorrectly set up. You might need to figure out exactly which driver you need and set it up correctly for your database type.
You can try to install it this way.
I'm already using the driver with the old abandoned odbc crate (odbc with r2d2_odbc). Using odbc-api directly works too, IIRC.
Hmm, this is a bit odd then. Can you show me how you're using it directly?
I was wrong. I got free(): invalid pointer when using odbc-api directly. I think I didn't have the problem in the past.
Right now I tried:
use r2d2_odbc_api::{buffers, Cursor};

let environment = odbc_api::Environment::new()?;
let mut conn = environment.connect_with_connection_string(&conn_str)?;

if let Some(cursor) = conn.execute("SELECT 1", ())? {
    let mut buffers = buffers::TextRowSet::for_cursor(5000, &cursor, Some(4096)).unwrap();
    let mut row_set_cursor = cursor.bind_buffer(&mut buffers).unwrap();
    while let Some(batch) = row_set_cursor.fetch().unwrap() {
        println!("line?");
        // if let Some(val) = batch.at(0, 0) {
        //     println!("THREAD {} {}", i, str::from_utf8(val).unwrap());
        // }
    }
}
Yeah, it has something to do with an incompatibility of the driver; that is why the free() occurs, as I already found that part out. But I am unsure if it's because of the version of the manager, or how iodbc implemented it, etc.
@bbigras I added a feature to this crate called iodbc. Could you please enable that feature?
r2d2_odbc_api = { git="https://github.com/AscendingCreations/r2d2_odbc_api.git", branch="pooling", features = ["iodbc"] }
Also make sure anything else using odbc-api directly enables that feature too.
I'll try.
Note that I was able to use odbc-api in the past https://github.com/pacman82/odbc-api/issues/148#issuecomment-1038344227 . I'm trying to figure out what changed.
Oh, I think the problem is with SELECT 1.
Yeah, if I change SELECT 1 to SELECT some_id FROM some_table LIMIT 1 in r2d2_odbc_api, pool.get()? and stmt.execute are working.
I'm having some TryFromIntError error now, but I think it might be from my input parameters.
@bbigras Hmm, that is weird. Maybe it is a statement issue for iodbc in odbc-sys; as @pacman82 said, he has no good way of testing this fully. Also, maybe some new changes he made since 0.34 have regressed a bit; it would be nice to figure out. I think my issue was that I did not have the iodbc feature set up for this library.
The TryFromIntError might be the way you're setting it up, or an issue in type mapping for iodbc.
@bbigras I will try to update this to the latest version of odbc-api in a bit. Also @pacman82 any information on the above issues he was having?
I'm trying to switch from r2d2_odbc to r2d2_odbc_api and I got this error while calling ODBCConnectionManager::new(conn_str).
odbc-api = "0.33"
r2d2_odbc_api = "0.1.3"