Hello All,
I have started working on a Microsoft Fabric data engineering POC using notebooks and pipelines and have gotten stuck. I am looking into the scenarios below but have not found much detail; if you have any references, please share them.
Error handling - When an error occurs in a PySpark notebook, what logs does the program output? We need to design an error-handling rule and a log format.
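To make that concrete, this is roughly the pattern we are experimenting with; the logger name, log format, and table names are placeholders, and `spark` is the session a Fabric notebook provides:

```python
import logging

# Minimal sketch of an error-handling rule; logger name, format,
# and table names are placeholders, not an established convention.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("etl.load")

try:
    df = spark.read.table("StagingTableA")            # hypothetical source
    df.write.mode("overwrite").saveAsTable("TableA")
    logger.info("TableA loaded, %d rows", df.count())
except Exception:
    # An uncaught stack trace also appears in the cell output and the
    # Spark driver log; logger.exception records it in our format too.
    logger.exception("TableA load failed")
    raise
```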
Transaction Management - Can PySpark manage a transaction scope? For example, we'd like to commit after the program loads Table A and Table B; if the program fails to load Table B, it should roll back Table A.
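Spark itself has no multi-table commit/rollback, so one workaround we are evaluating (not a confirmed best practice) is Delta time travel: record Table A's version before the load and RESTORE it if Table B fails. This sketch assumes both tables already exist as Delta tables in the lakehouse:

```python
from delta.tables import DeltaTable

def current_version(name: str) -> int:
    # Latest committed Delta version of the table.
    return DeltaTable.forName(spark, name).history(1).first()["version"]

df_a = spark.createDataFrame([(1, "a")], ["id", "val"])   # placeholder data
df_b = spark.createDataFrame([(2, "b")], ["id", "val"])

version_a = current_version("TableA")  # remember the pre-load version

try:
    df_a.write.mode("append").saveAsTable("TableA")
    df_b.write.mode("append").saveAsTable("TableB")       # may fail
except Exception:
    # "Rollback": restore Table A to its pre-load version via time travel.
    spark.sql(f"RESTORE TABLE TableA TO VERSION AS OF {version_a}")
    raise
```

Note that RESTORE is itself a new commit, so the table history is preserved; the main caveat is that any concurrent writes to Table A in that window would also be undone.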
Log Management - Does MS Fabric have a log management function? Can we use Log Analytics?
Backup/restore - Does MS Fabric have a backup/restore function? Can we use Synapse?