Issues:
Streaming currently does not make progress if there is no existing checkpoint file for the given partition(s). This typically happens at the very beginning of a streaming job when "ChangeFeedStartFromTheBeginning" is set to false.
A read checkpoint path given as an ADLS (abfss://) or Blob (wasb://) URI is not recognized, and the checkpoint files are written to the default FS instead. For example, on Databricks the read checkpoint files are written to "dbfs:///" when an ADLS or Blob path is provided, because that is the "fs.defaultFS" Hadoop config.
Changes:
Set the next continuation token from feedResponse.getResponseContinuation when the given partition does not have an existing token (see the first sketch below).
Use the URI form of Hadoop FileSystem creation by passing the CheckpointLocation, so the path's own scheme is honored (see the second sketch below).
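
A minimal sketch of the first change, assuming the DocumentDB Java SDK's FeedResponse.getResponseContinuation and a hypothetical per-partition token map; nextToken, tokens, and partitionId are illustrative names, not the connector's actual code:

```scala
import com.microsoft.azure.documentdb.{Document, FeedResponse}

import scala.collection.mutable

object ContinuationTokenSketch {
  // Return the continuation token to use for a partition, falling back to the
  // token carried by the current feed response when no checkpoint exists yet.
  def nextToken(partitionId: String,
                tokens: mutable.Map[String, String],
                feedResponse: FeedResponse[Document]): String = {
    tokens.get(partitionId) match {
      // A checkpointed token exists for this partition: keep using it.
      case Some(existing) => existing
      // No checkpointed token yet (e.g. the first micro-batch with
      // ChangeFeedStartFromTheBeginning = false): seed it from the current
      // feed response so the stream can make progress.
      case None =>
        val continuation = feedResponse.getResponseContinuation
        tokens.put(partitionId, continuation)
        continuation
    }
  }
}
```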
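
A minimal sketch of the second change, using Hadoop's FileSystem.get(URI, Configuration); checkpointFs and checkpointLocation are illustrative names:

```scala
import java.net.URI

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

object CheckpointFsSketch {
  // Resolve the FileSystem from the checkpoint location's own scheme
  // (abfss://, wasb://, ...) instead of relying on fs.defaultFS.
  def checkpointFs(checkpointLocation: String, hadoopConf: Configuration): FileSystem =
    FileSystem.get(new URI(checkpointLocation), hadoopConf)
}
```

With this form, a checkpoint location such as an abfss:// or wasb:// URI is resolved against ADLS or Blob storage rather than silently falling back to the default FS (dbfs:/// on Databricks).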