From Synapse's Apache Spark pool, is it possible to write to an existing internal/external table? All the examples I have found relate to creating a new table and loading data into it. Also, given that a Synapse pipeline runs on Spark, how does it manage to select/update? To get the same set of features through PySpark/Scala, do we need to switch to Databricks instead of the Apache Spark pool that comes with Synapse?