Closed serban-dobroiu closed 1 year ago
The locks are taken by the JDBC driver while reading from / writing to the SQL table. Once the transaction completes, the lock is automatically released. The Spark MS SQL Connector does not add any extra locking of its own. SQL Server tracks all locks internally; you can inspect them via Dynamic Management Views (DMVs) such as sys.dm_tran_locks.
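If a lock appears to linger after a job is killed, the session holding it can be inspected directly in SQL Server. A minimal sketch using the DMV mentioned above (the database name and session ID below are illustrative, not from the original report):

```sql
-- List current locks in the target database, including the session holding each one.
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('MyTargetDb');  -- hypothetical database name

-- If the Spark job's JDBC session was killed but left an open transaction,
-- terminating that session rolls the transaction back and releases its locks:
-- KILL 53;  -- replace 53 with the offending request_session_id from above
```

Terminating the session forces a rollback, which is also why a large interrupted bulk write can take a while to fully release its locks.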
Hi, I'm trying to run a Spark job and write its output to a SQL Server database. While experimenting with the batchsize option and with tableLock = true, I had to kill some of the jobs, but it seems the table lock was not released. Is there anything I can do to ensure the lock gets released when the job fails or is killed?
Using spark-mssql-connector-1.0.1.jar and mssql-jdbc-11.2.0.jre8.jar
Thank you!