steffengy / tiberius

TDS 7.4 (mssql / Microsoft SQL Server) async driver for rust. Fork at: https://github.com/prisma/tiberius
Apache License 2.0
151 stars 2 forks

Panics at larger datasets #79

Closed cfsamson closed 5 years ago

cfsamson commented 5 years ago

I have a query requesting 10 000 rows of a 21-column dataset (only varchar(255), f64, and i32 fields). I get this error message:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Canceled', libcore/result.rs:945:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.

If I only request 1000 rows the query completes successfully. Do you have any idea what's causing the problem?

Rust version: stable 1.29.0

I had to load Tiberius this way because I'm on a Mac and don't have a certificate:

tiberius = { version = "0.3.0", default-features=false, features=["chrono"] } 
cfsamson commented 5 years ago

It panics when loading more than 3108 rows to be exact.

cfsamson commented 5 years ago

After investigating a bit, I can't see any pattern to this. It seems to panic rather randomly: it might panic on row 3108, but if I query for TOP(3000) on rows > 3100 it panics on row 3360. Then if I set it to query for rows > 3300 it doesn't panic at all... The only pattern I see is that it happens after parsing some hundred or some thousand rows on some random combination of events. But once it panics, it panics on the same row every time... I can't figure this out.

cfsamson commented 5 years ago

Ok, so after polling the stream and printing the errors I get a more detailed response:

Protocol("floatn: length of 192 is invalid")
Protocol("floatn: length of 19 is invalid")
thread 'main' panicked at 'not yet implemented', /Users/carlfredriksamson/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/types/mod.rs:326:34
note: Run with `RUST_BACKTRACE=1` for a backtrace.
brokenthorn commented 5 years ago

Could be a float(n) type not being supported. n (mantissa bits) is between 1 and 53.
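For context on why lengths like 192 or 19 would be rejected: SQL Server's float(n) accepts n from 1 to 53 at the column level, but on the wire TDS transmits FLTN values with a per-value length byte that may only be 0 (NULL), 4 (real), or 8 (float). A minimal sketch of that kind of guard (an assumption about what the error message is checking, not tiberius's actual code):

```rust
// Sketch of the guard behind "floatn: length of N is invalid" (illustrative,
// simplified from the TDS rules, not tiberius's implementation). A FLTN value's
// length byte may only be 0 (NULL), 4 (real), or 8 (float); anything else,
// like the observed 192 or 19, suggests the reader has lost its position
// in the byte stream.
fn floatn_len_is_valid(len: u8) -> bool {
    matches!(len, 0 | 4 | 8)
}

fn main() {
    assert!(floatn_len_is_valid(8));
    assert!(!floatn_len_is_valid(192));
    println!("ok");
}
```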

cfsamson commented 5 years ago

Scratched my head on this one for quite some time, but if it's a datatype that isn't implemented, shouldn't it panic on the first record then? The panics seemed rather erratic when I tested different selections of the dataset...

cfsamson commented 5 years ago

Here is the output of printing a row object, it shows the structure of the data transmitted. Note that this was parsed without a problem.

QueryRow(TokenRow { meta: TokenColMetaData { columns: [MetaDataColumn { base: 
BaseMetaDataColumn { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Datetimen, 8, 
None) }, col_name: "Date" }, MetaDataColumn { base: BaseMetaDataColumn { flags: CDF_NULLABLE 
| CDF_UPDATEABLE, ty: VarLenSized(Floatn, 8, None) }, col_name: "Name1" }, MetaDataColumn { 
base: BaseMetaDataColumn { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Floatn, 8,
 None) }, col_name: "Name2" }, MetaDataColumn { base: BaseMetaDataColumn { flags: 
CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Floatn, 8, None) }, col_name: "Name3" }, 
MetaDataColumn { base: BaseMetaDataColumn { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: 
VarLenSized(Floatn, 8, None) }, col_name: "Name4" }, MetaDataColumn { base: BaseMetaDataColumn 
{ flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Floatn, 8, None) }, col_name: "Name5" }, 
MetaDataColumn { base: BaseMetaDataColumn { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: 
VarLenSized(Floatn, 8, None) }, col_name: "Name6" }, MetaDataColumn { base: BaseMetaDataColumn
 { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Floatn, 8, None) }, col_name: "Name7" },
 MetaDataColumn { base: BaseMetaDataColumn { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: 
VarLenSized(Floatn, 8, None) }, col_name: "Name8" }, MetaDataColumn { base: BaseMetaDataColumn 
{ flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Floatn, 8, None) }, col_name: "Name9" }, 
MetaDataColumn { base: BaseMetaDataColumn { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: 
VarLenSized(Floatn, 8, None) }, col_name: "Name10" }, MetaDataColumn { base: BaseMetaDataColumn 
{ flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Floatn, 8, None) }, col_name: "Name11" },
 MetaDataColumn { base: BaseMetaDataColumn { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: 
VarLenSized(Intn, 4, None) }, col_name: "Name12" }, MetaDataColumn { base: BaseMetaDataColumn {
 flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(BigVarChar, 15, Some(Collation { info: 
13632518, sort_id: 0 })) }, col_name: "Name13" }, MetaDataColumn { base: BaseMetaDataColumn { 
flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(BigVarChar, 10, Some(Collation { info: 
13632518, sort_id: 0 })) }, col_name: "Name14" }, MetaDataColumn { base: BaseMetaDataColumn { 
flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(BigVarChar, 15, Some(Collation { info: 
13632518, sort_id: 0 })) }, col_name: "Name15" }, MetaDataColumn {base: BaseMetaDataColumn { 
flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Floatn, 8, None) }, col_name: "Name16" },
 MetaDataColumn { base: BaseMetaDataColumn { flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: 
VarLenSized(Floatn, 8, None) }, col_name: "Name17" }, MetaDataColumn { base: BaseMetaDataColumn 
{ flags: CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Intn, 4, None) }, col_name: "Name18" },
 MetaDataColumn { base: BaseMetaDataColumn { flags: CDF_UPDATEABLE_UNKNOWN, ty: 
FixedLen(Int4) }, col_name: "Name19" }, MetaDataColumn { base: BaseMetaDataColumn { flags: 
CDF_NULLABLE | CDF_UPDATEABLE, ty: VarLenSized(Intn,4, None) }, col_name: "Name20" }] }, 
columns: [DateTime(DateTime { days: 42004, seconds_fragments: 0 }), F64(0.0), F64(0.0), F64(5.0),
 F64(92.35), F64(18.47), F64(0.0), F64(0.0), F64(0.0), F64(5.0), F64(92.35), F64(100.0), I32(5), 
String("10"), String("XXX040"), String("XXX040"), F64(4.01), F64(10.44), I32(20150102), I32(99999),
 None] })
cfsamson commented 5 years ago

This is the data in the 3109th row that panics:

2015-01-03 00:00:00.000 
0   
0   
5   
29.35   
5.87    
2   
11.74   
40  
3   
17.61   
60  
27  
53  
NULL    
NULL    
NULL    
NULL    
20150103    
33605   
NULL

It seems to be this combination of data that causes the panic; however, I can see more rows that look like this that don't panic, so I can't see the pattern visually...

brokenthorn commented 5 years ago

Try selecting all your data with order by primary key. See where it panics and if it's consistent while selecting various top x rows, or inconsistent like before. Not ordering the result set might be what's causing the randomness in the resulting row that causes the panic.

cfsamson commented 5 years ago

OK, so I tried this; this is what happens. Basically I have this query: select top(4000) * from table t1 where t1.id > 0 order by t1.id

I print every ID to see where it panics:

So the Ids in this table start at 30497:

ex1: id > 0 => panics at 33605 (every time)
ex2: id > 30497 => doesn't panic (wow, should be the same as id > 0)
ex3: id > 30500 => panics at 31412 (every time)
ex4: id > 31000 => panics at 31731 (every time)
ex5: id > 32000 => panics at 32441 (every time)
ex6: id > 32400 => panics at 33940 (every time)

As you can see, there is no pattern to this. The only one that didn't panic before 4000 rows were fetched was ex2, but if I increase the number of rows to fetch, it panics at 35435...

cfsamson commented 5 years ago

Just adding information here as I try to find out what's causing the panic.

I tried selecting every single column together with the Id to see if I could locate where the panic was, but when I select the columns one at a time together with the Id, nothing panics.

So I have no idea on how to help this further really. It doesn't seem to be related to corrupt data or anything like that.

It does seem to be predictable (I've now tried on both Windows and Mac) in that it panics at Id 33605 every time when using the query in "ex1" above (and the same with "ex2" on two different operating systems), but it's not after a certain number of records or any clear pattern.

I checked the text fields to see if any of them might have a large amount of data that might vary, but the text fields are varchar(10), varchar(15) and varchar(10).

For completeness, here are the column types of the table in the database, in case anyone finds a way to replicate this:

| Column | Data type   | Allow nulls |
| ------ | ----------- | ----------- |
| row1   | datetime    | yes |
| row2   | float       | yes |
| row3   | float       | yes |
| row4   | float       | yes |
| row5   | float       | yes |
| row6   | float       | yes |
| row7   | float       | yes |
| row8   | float       | yes |
| row9   | float       | yes |
| row10  | float       | yes |
| row11  | float       | yes |
| row12  | float       | yes |
| row13  | int         | yes |
| row14  | varchar(15) | yes |
| row15  | varchar(10) | yes |
| row16  | varchar(15) | yes |
| row17  | float       | yes |
| row18  | float       | yes |
| row19  | int         | yes |
| row20  | int         | no  |
| row21  | int         | yes |
brokenthorn commented 5 years ago

Try select * from table t1 where t1.id >= 34497 and t.id <= 30500 order by t1.id. Tell me how it does. I'm trying to rule out some bad data between ext2 and ext3, where you've left out some values (ex1 selects 30497+4000 = 34497 rows which is less than the next beginning row of ex3, 30500).

brokenthorn commented 5 years ago

Also, you might wanna try selecting everything again, but this time check for nulls on nullable columns and replace them with default values (i.e. try IFNULL(col, default_val)). By the way, selecting everything with select * is a bad idea. Always specify and order your columns.

cfsamson commented 5 years ago

@brokenthorn Ok, so I tried what you suggested (except that I had to rewrite the SQL where-clause to select * from table t1 where t1.id >= 34497 or t1.id <= 30500 order by t1.id, or no records showed up :) And I rewrote it so that I specified each column.

Now it panics on Id: 35822.

So I also tried your second suggestion, but got this: thread 'main' panicked at 'unsupported fixed type decoding: Float8'

So it seems that providing a fixed type doesn't work. I tried to add many decimals but it still reads it as Float8
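A plausible reading of that new panic (an assumption, not something confirmed in the thread): wrapping a nullable float in IFNULL/ISNULL makes the result column non-nullable, so the server can send it as the fixed-length FLT8 type instead of the nullable, length-prefixed FLTN, hitting a decode path tiberius 0.3.0 hadn't implemented. In TDS terms the two arrive under different type tokens:

```rust
// Sketch of the FLT8 vs FLTN distinction. The token byte values are from the
// MS-TDS spec; the enum and function are illustrative, not tiberius's types.
#[derive(Debug, PartialEq)]
enum FloatTy {
    Flt8, // 0x3E: fixed 8-byte float, no per-value length byte, never NULL
    Fltn, // 0x6D: variable, each value prefixed with a length byte (0 = NULL)
}

fn float_type_token(b: u8) -> Option<FloatTy> {
    match b {
        0x3E => Some(FloatTy::Flt8),
        0x6D => Some(FloatTy::Fltn),
        _ => None,
    }
}

fn main() {
    assert_eq!(float_type_token(0x3E), Some(FloatTy::Flt8));
    assert_eq!(float_type_token(0x6D), Some(FloatTy::Fltn));
    println!("ok");
}
```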

cfsamson commented 5 years ago

@brokenthorn Finally!! I got somewhere!

OK, removing columns one at a time didn't work, BUT when I removed the two varchar(15) columns - NO PANIC. (I'll have to triple-check that those are in fact the two varchar(15) columns and not the varchar(10) column, but I'm pretty sure.)

So if I'm not mistaken, there seems to be a correlation between parsing VARCHAR(15) data and the panic, VARCHAR(10) seems to work fine...

cfsamson commented 5 years ago

Could it be related to this in types/mod.rs?

[screenshot 2018-09-21 at 21:03:22]
brokenthorn commented 5 years ago

> @brokenthorn Ok, so I tried what you suggested (other than I had to rewrite the sql when to: select * from table t1 where t1.id >= 34497 or t1.id <= 30500 order by t1.id or no records showed up :) And I rewrote so that I specified each column.
>
> Now it panics on Id: 35822.
>
> So I also tried your second suggestion, but got this: thread 'main' panicked at 'unsupported fixed type decoding: Float8'
>
> So it seems that providing a fixed type doesn't work. I tried to add many decimals but it still reads it as Float8

Sorry about the numbers. I got them wrong... must be tired.

brokenthorn commented 5 years ago

> Could it be related to this in types/mod.rs?
>
> [screenshot 2018-09-21 at 21:03:22]

Shouldn't be. Both varchar and nvarchar are implemented. My guess is that you have invalid unicode characters in the data. What are your database and/or column collations?

cfsamson commented 5 years ago

I'll see if I can get more info about the setup, but it works without issues with the pymssql driver and Microsoft's drivers, so I have a feeling it's not the data... Visually inspecting a couple of hundred rows shows all of them in this pattern:

col1 col2
K11 KON040

Uppercase mix of normal alphanumeric characters, no language specific characters.

cfsamson commented 5 years ago

Ok, so maybe this can help. If I only query the two varchar(15) columns that I assume are causing the problems, I get this error (the numbers printed are record Ids):

35594
35595
35596
9112834
thread 'main' panicked at 'invalid token received 0x0', /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/transport.rs:413:29
note: Run with `RUST_BACKTRACE=1` for a backtrace.

And:

34347
34348
8793346
thread 'main' panicked at 'invalid token received 0x0', /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/transport.rs:413:29
note: Run with `RUST_BACKTRACE=1` for a backtrace.

I now query like this:

SELECT 
t1.Id,
t1.VarChar(15)Column

FROM table t1 
ORDER BY t1.Id

Btw, there are records with Ids 9112834 and 9112835 (one of which should be the one causing the panic); both look perfectly normal, but there is no record with Id 8793346 or with 8793347...
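For reference, TDS token-stream tokens are single identifying bytes (e.g. 0x81 COLMETADATA, 0xD1 ROW, 0xFD DONE, per MS-TDS), and 0x00 is not among them, so "invalid token received 0x0" usually means the reader's position has drifted off a real token boundary and it is interpreting some data byte as a token. A hedged sketch of that dispatch (illustrative, not tiberius's code):

```rust
// Minimal token dispatch sketch. The token byte values are from the MS-TDS
// spec; the enum and error shape are illustrative only. An out-of-sync reader
// that lands on a 0x00 data byte fails exactly like the reported panic.
#[derive(Debug, PartialEq)]
enum Token {
    ColMetaData,
    Row,
    Done,
}

fn parse_token(b: u8) -> Result<Token, String> {
    match b {
        0x81 => Ok(Token::ColMetaData),
        0xD1 => Ok(Token::Row),
        0xFD => Ok(Token::Done),
        other => Err(format!("invalid token received 0x{:x}", other)),
    }
}

fn main() {
    assert_eq!(parse_token(0xD1), Ok(Token::Row));
    assert_eq!(parse_token(0x00), Err("invalid token received 0x0".to_string()));
    println!("ok");
}
```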

cfsamson commented 5 years ago

So I ran the last example with a backtrace and got this. Hope that helps to figure out what's wrong:

thread 'main' panicked at 'invalid token received 0x0', /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/transport.rs:413:29
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:227
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at libstd/panicking.rs:475
   5: std::panicking::continue_panic_fmt
             at libstd/panicking.rs:390
   6: std::panicking::try::do_call
             at libstd/panicking.rs:345
   7: tiberius::transport::TdsPacketId::next
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/transport.rs:413
   8: <tiberius::transport::TdsTransport<I>>::read_token::{{closure}}
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/transport.rs:463
   9: <tiberius::query::QueryStream<I> as futures::stream::Stream>::poll
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/query.rs:148
  10: <tiberius::stmt::QueryResult<S> as futures_state_stream::StateStream>::poll
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/stmt.rs:304
  11: <futures_state_stream::ForEach<S, F> as futures::future::Future>::poll
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-state-stream-0.1.1/src/lib.rs:1009
  12: <futures::future::chain::Chain<A, B, C>>::poll
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/future/chain.rs:32
  13: <futures::future::and_then::AndThen<A, B, F> as futures::future::Future>::poll
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/future/and_then.rs:32
  14: <futures::task_impl::Spawn<T>>::poll_future_notify::{{closure}}
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/mod.rs:314
  15: <futures::task_impl::Spawn<T>>::enter::{{closure}}
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/mod.rs:388
  16: futures::task_impl::std::set
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/std/mod.rs:78
  17: <futures::task_impl::Spawn<T>>::enter
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/mod.rs:388
  18: <futures::task_impl::Spawn<T>>::poll_future_notify
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/mod.rs:314
  19: futures::task_impl::std::<impl futures::task_impl::Spawn<F>>::wait_future::{{closure}}
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/std/mod.rs:231
  20: futures::task_impl::std::ThreadNotify::with_current::{{closure}}
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/std/mod.rs:478
  21: <std::thread::local::LocalKey<T>>::try_with
             at /Users/travis/build/rust-lang/rust/src/libstd/thread/local.rs:294
  22: <std::thread::local::LocalKey<T>>::with
             at /Users/travis/build/rust-lang/rust/src/libstd/thread/local.rs:248
  23: futures::task_impl::std::ThreadNotify::with_current
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/std/mod.rs:478
  24: futures::task_impl::std::<impl futures::task_impl::Spawn<F>>::wait_future
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/task_impl/std/mod.rs:228
  25: futures::future::Future::wait
             at /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-0.1.24/src/future/mod.rs:299
  26: myprogram::main
             at src/main.rs:47
  27: std::rt::lang_start::{{closure}}
             at /Users/travis/build/rust-lang/rust/src/libstd/rt.rs:74
  28: std::panicking::try::do_call
             at libstd/rt.rs:59
             at libstd/panicking.rs:310
  29: panic_unwind::dwarf::eh::read_encoded_pointer
             at libpanic_unwind/lib.rs:105
  30: <std::sync::mutex::Mutex<T>>::new
             at libstd/panicking.rs:289
             at libstd/panic.rs:392
             at libstd/rt.rs:58
  31: std::rt::lang_start
             at /Users/travis/build/rust-lang/rust/src/libstd/rt.rs:74
  32: myprogram::main
steffengy commented 5 years ago

Errors in these kinds of cases are rarely helpful, because something goes wrong before them. Could you try to produce a minimal set to reproduce? (Get rid of rows that work and columns that have no effect, until in the best case there's only one row & column left.)

cfsamson commented 5 years ago

@steffengy yes I can.

The minimal example I can give is this:

Given the following schema:

| Column | Data type   | Allow nulls |
| ------ | ----------- | ----------- |
| row14  | varchar(15) | yes |
| row20  | int         | no  |

If I query for this it panics after 3854 rows every time (I think it panics trying to read row 3855 but not sure):

SELECT t1.row20, t1.row14 from table t1 order by t1.row20

(if I select only row20 or only row14 it doesn't panic)

All the data in the rows are like this

row20 row14
34340 BAK040
34341 KON040
34342 B10
34343
34344 KON020
34345

I tried to create a new DB on a hosted service and transfer the data so it would be easily replicable, but I didn't find a free service that worked well enough for this, and working from a Mac it's not so easy to set up a second server to replicate on. If needed I think I can transfer the relevant data as a CSV, but really, it's all the same kind of data as above in the two columns of interest.

cfsamson commented 5 years ago

I've looked at 5 different panics by just setting a where t1.id > last_panic_id, and it seems to happen every time on a row where row14 is NULL, if that helps. Now, NULL is pretty common in row14, but I would guess < 10 % of the rows are NULL, so it doesn't seem entirely coincidental.
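That NULL correlation would fit a length-handling bug: for "short length" character types like varchar(15), TDS prefixes each value with a 2-byte little-endian length, and the special value 0xFFFF means NULL, so a NULL consumes only the 2 prefix bytes. A decoder that miscounts that case loses its offset exactly at a NULL and corrupts everything after it. A hedged sketch of the correct accounting (simplified from the TDS rules, not tiberius's decoder):

```rust
// Decode one "short length" varchar value: returns the raw bytes (None for
// NULL) plus how many bytes of the buffer were consumed. Mis-counting the
// consumed bytes on the NULL branch would desynchronize every later read.
fn decode_short_varchar(buf: &[u8]) -> (Option<&[u8]>, usize) {
    let len = u16::from_le_bytes([buf[0], buf[1]]);
    if len == 0xFFFF {
        (None, 2) // CHARBIN_NULL: only the length prefix is consumed
    } else {
        let n = len as usize;
        (Some(&buf[2..2 + n]), 2 + n)
    }
}

fn main() {
    assert_eq!(decode_short_varchar(&[0xFF, 0xFF]), (None, 2));
    assert_eq!(
        decode_short_varchar(&[0x03, 0x00, b'a', b'b', b'c']),
        (Some(&b"abc"[..]), 5)
    );
    println!("ok");
}
```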

cfsamson commented 5 years ago

And lastly, it's not possible to narrow it down to only one row. I can select a single row that it panics on when parsing many rows, and it works fine; it only panics when parsing many rows - somewhere between 400 and 10 000 usually. The smallest number of rows I have parsed and gotten a panic on was 421, with the exact same query as above.

steffengy commented 5 years ago

Can you provide a .SQL file and query I can try to reproduce with, so far locally I haven't been able to?

cfsamson commented 5 years ago

Would it help if I exported the data to a CSV that you can import into your local db? The code that panics is this (it also shows the query):

extern crate futures;
extern crate futures_state_stream;
extern crate tiberius;

use futures::Future;
use futures_state_stream::StateStream;
use tiberius::query::QueryRow;
use tiberius::SqlConnection;

fn main() {
  let connstr = "server=tcp:[iptoserver,port]; TrustServerCertificate=true; database=MyDB; UID=cantshow; PWD=cantshow";
  let qry = "SELECT t1.number, t1.text from MyDB.dbo.table t1 order by t1.number";
  let mut counter = 1;
  let future = SqlConnection::connect(&connstr).and_then(|conn| {
    conn.simple_query(qry).for_each(|_row: QueryRow| {
      println!("{}", counter);
      counter += 1;
      Ok(())
    })
  });

  future.wait().unwrap();
}
steffengy commented 5 years ago

I'd prefer an SQL export of the table, if possible. (Using SQL Server Management Studio/SSMS that's quite easy.)

cfsamson commented 5 years ago

OK. I'll log on to the office and see - do you only need the two columns causing trouble?

Any way I can send it to you privately?

steffengy commented 5 years ago

If that's enough to reproduce, whatever columns and rows are needed to trigger the bug on your side. I'd advise just making an SQL dump, reducing & redacting/replacing with dummy data, importing on a test server, and retesting with a query - then we're one step closer to a minimal reproduction.

Basically, just give me a SQL file (not containing any sensitive/real data, please) that creates a table with as little data as possible, plus a query, so I just have to import & run the query to reproduce.

cfsamson commented 5 years ago

I'll try

brokenthorn commented 5 years ago

Might I add that the crashing row might not be the last row that's printed but the one after that? For example, here:

> 35594
> 35595
> 35596
> 9112834
> thread 'main' panicked at 'invalid token received 0x0', /Users/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tiberius-0.3.0/src/transport.rs:413:29
> note: Run with RUST_BACKTRACE=1 for a backtrace.

It might not be the row with id 9112834 that's causing the panic because the error seems to be happening in transport (TCP? TDS Stream?) so if you can print a row, it has already been transported.

cfsamson commented 5 years ago

@brokenthorn Yes, I actually assumed that, but the next row in the sequence is 35597 (in the example above), so why it jumps to 9112834 is strange... I'm away for two weeks now and couldn't finish @steffengy's request before leaving, other than getting a minimal set of data over to a local server and replicating the crash there as well - so it looks like a real issue. I need more time to pinpoint exactly which conditions cause the crash... If I remove one column from the 21-column dataset the crash seems to disappear, so it's complex. The next step is to change the data in the rows, and hopefully post a set of data/.SQL files here that will replicate the situation.
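The "repeatable but pattern-free" behavior described above also fits a packet-framing bug: TDS splits the token stream into fixed-size packets, and a row can straddle a packet boundary. Which row straddles one depends on the total bytes streamed before it, not on that row's own content, so changing the WHERE clause shifts every offset and moves the failure, while rerunning the same query fails at the same row. A toy illustration of that offset arithmetic (the row and payload sizes are made up):

```rust
// For fixed-size packet payloads, find the first row whose bytes cross a
// packet boundary. The answer depends only on cumulative offsets, which is
// why the failing row moves whenever the query (and hence the byte stream)
// changes, yet stays the same on reruns of an identical query.
fn first_row_straddling_boundary(row_len: usize, packet_payload: usize) -> Option<usize> {
    if row_len == 0 || packet_payload % row_len == 0 {
        return None; // degenerate, or rows align exactly with packet boundaries
    }
    let mut offset = 0;
    for row_idx in 0.. {
        let end = offset + row_len - 1;
        if offset / packet_payload != end / packet_payload {
            return Some(row_idx); // this row is split across two packets
        }
        offset += row_len;
    }
    None
}

fn main() {
    // 100-byte rows, 4088-byte payloads: row 40 (bytes 4000..4100) crosses
    // the first packet boundary.
    assert_eq!(first_row_straddling_boundary(100, 4088), Some(40));
    println!("ok");
}
```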

cfsamson commented 5 years ago

@steffengy OK, so I finally managed to create a set of instructions to recreate the bug with mock data:

This is the SQL file including the minimum amount of data I could include to reliably reproduce the bug. In your MS SQL Server instance (I have only tested it on MS SQL Express):

  1. Create a database called "tibtest", or a new db with a name of your choice, and change the first line in the SQL script to match that name after you have opened it in step 2: USE [YourDbName]

  2. Unzip the script, open the file and execute the script:

tiberius_bug_sql.zip

  3. Use the code below and it will panic at count 4662 (at least on my machine) - you must change the connection string to match your own setup.

main.rs:

extern crate futures;
extern crate futures_state_stream;
extern crate tiberius;

use futures::Future;
use futures_state_stream::StateStream;
use tiberius::query::QueryRow;
use tiberius::SqlConnection;

fn main() {
  let connstr = "server=tcp:localhost,53354; integratedSecurity=true;";
  let qry = "SELECT TOP(40000) * from test.dbo.tibdata t1 order by t1.integer3_notnull";
  let mut counter = 1;
  let future = SqlConnection::connect(&connstr).and_then(|conn| {
    conn.simple_query(qry).for_each(|_row: QueryRow| {
      println!("{}", counter);
      counter += 1;
      Ok(())
    })
  });

  future.wait().unwrap();

}

Cargo.toml:

[package]
name = "tiberius"
version = "0.1.0"
authors = ["cf"]
edition = "2018"

[dependencies]
tiberius = {version = "0.3.0", default-features=false, features=["chrono"]}
futures = "0.1.18"
futures-state-stream = "0.1.0"
chrono = "0.4.6"
steffengy commented 5 years ago

For me that runs until 9503 without any error. You have reproduced using the latest master, right? Which MSSQL Server version is that? I am using 2016.

cfsamson commented 5 years ago

Strange. I'm pulling version "0.3.0" from cargo; I haven't tried reproducing from master.

SQL Server is the latest version (version 14). Product version: 14.0.1000 RTM.

I can try to build from master and see if the problem's still there.

cfsamson commented 5 years ago

OK, so on master it works, at least on the sample data locally. I'll check later to confirm it works on the main database as well.

cfsamson commented 5 years ago

@steffengy it works without any apparent issues on master when testing on larger datasets as well. Closing this issue then.