microsoft / sql-spark-connector

Apache Spark Connector for SQL Server and Azure SQL
Apache License 2.0
274 stars 116 forks

how to ensure table lock is released #203

Closed serban-dobroiu closed 1 year ago

serban-dobroiu commented 1 year ago

Hi, I'm trying to run a Spark job and write the output to a SQL Server db. While playing around with the batchsize option and having tableLock = true, I had to kill some of the jobs, but it seems the table lock was not released. Is there anything I can do to ensure the lock gets released, either on job failure or when the job is killed?

Using spark-mssql-connector-1.0.1.jar and mssql-jdbc-11.2.0.jre8.jar
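For context, a write with these options might look like the sketch below. The server, database, table, and credential values are placeholders; `tableLock` and `batchsize` are the connector options being discussed.

```python
# Hypothetical connection details; tableLock/batchsize are the
# connector write options under discussion in this issue.
write_options = {
    "url": "jdbc:sqlserver://<server>:1433;databaseName=<db>",
    "dbtable": "dbo.target_table",
    "user": "<user>",
    "password": "<password>",
    "tableLock": "true",    # request a table-level lock for the bulk insert
    "batchsize": "100000",  # rows per bulk-insert batch
}

# On a live cluster the write itself would be:
# df.write \
#     .format("com.microsoft.sqlserver.jdbc.spark") \
#     .mode("append") \
#     .options(**write_options) \
#     .save()
```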

Thank you!

luxu1-ms commented 1 year ago

The locks are taken by JDBC while reading from / writing to the SQL table. Once the transaction is over, the lock is released automatically. The Spark MS SQL Connector does not add any extra locking. SQL Server tracks all locks internally, e.g. via the Dynamic Management View (DMV) sys.dm_tran_locks.
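To inspect lingering locks on a table, you can query the DMV mentioned above; `MyDatabase` and `dbo.target_table` are placeholders. If a killed job's session still holds the lock, killing that session on the SQL Server side releases it (the `KILL <session_id>` command rolls back the session's open transaction).

```sql
-- Find sessions holding locks on a specific table (placeholder names).
SELECT tl.request_session_id,
       tl.resource_type,
       tl.request_mode,
       tl.request_status
FROM sys.dm_tran_locks AS tl
WHERE tl.resource_database_id = DB_ID('MyDatabase')
  AND tl.resource_associated_entity_id = OBJECT_ID('dbo.target_table');

-- If a session from a killed Spark job still holds the lock:
-- KILL <session_id>;  -- rolls back its transaction and releases the lock
```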

luxu1-ms commented 1 year ago

Close inactive issues