dconnolly131 opened 1 year ago
Are the partitions added in the database project? If you rebuild the project, it should resolve the relationships between the partitions and the data compression option.
Hi @dconnolly131, when #3 happens, does that mean this script needs to change? CREATE PARTITION FUNCTION pf_PartitionCompressionIssue (INT) AS RANGE FOR VALUES (1, 10, 100)
I understand your concern about changing the source every time that happens, but wouldn't this cause a schema mismatch between source and target?
@llali Yes, the Partition Function would need to change to resolve the discrepancy. But we have the option to ignore Partition Functions/Schemes, which we use at our company: we have partitioned tables that create a new partition each day, so we would have to manually update source control once a day if we didn't ignore them.
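For context, the ignore options mentioned above are standard SqlPackage deploy properties and can be set in a publish profile; a minimal sketch (property names from the documented DacDeployOptions, adjust for your own profile):

```xml
<!-- Fragment of a .publish.xml profile: skip partition scheme/function
     differences when comparing source project to target database. -->
<PropertyGroup>
  <IgnorePartitionSchemes>True</IgnorePartitionSchemes>
</PropertyGroup>
```

The same property can be passed on the command line as `/p:IgnorePartitionSchemes=True`.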
The issue here is the DATA_COMPRESSION=PAGE functionality. If I define just:
WITH (DATA_COMPRESSION = PAGE) ON [ps_PartitionCompressionIssue] ([UserAccountID])
then I'm telling SQL Server I want all my partitions page-compressed.
SQLPackage then, for some reason, splits this out into a totally different syntax using the partition scheme/function when it doesn't need to:
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS (1), DATA_COMPRESSION = PAGE ON PARTITIONS (2), DATA_COMPRESSION = PAGE ON PARTITIONS (3), DATA_COMPRESSION = PAGE ON PARTITIONS (4)) ON [ps_PartitionCompressionIssue] ([UserAccountID])
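For reference, on a table with exactly four partitions these WITH-clause fragments all request page compression on every partition (standard CREATE/ALTER TABLE compression syntax; the table and partition count here are just illustrative):

```sql
-- Equivalent requests for PAGE compression on all four partitions:
WITH (DATA_COMPRESSION = PAGE)
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS (1 TO 4))
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS (1), DATA_COMPRESSION = PAGE ON PARTITIONS (2),
      DATA_COMPRESSION = PAGE ON PARTITIONS (3), DATA_COMPRESSION = PAGE ON PARTITIONS (4))
```

The first form stays valid no matter how many partitions are later added; the last two are tied to the current partition count, which is why scripting them into source control causes drift.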
This means I can ignore Partition Functions/Schemes all I want, but I can't do anything about this data compression syntax, which means we can't have CI/CD at our company: to "resolve" the difference, SQLPackage wants to rebuild our 4 TB tables.
I have struggled with the same issue and have never found an acceptable workaround. I would like to know what Microsoft's recommended solution is for using page level compression with dynamically-generated partitions.
@dconnolly131 thanks for the info. I'll investigate this more to find an acceptable solution.
Hi @llali was there any further update on this? Thanks
Hey @dconnolly131, Sorry for the delay.
Below are my findings and question:
Thanks
Hi @dconnolly131, do you have any update on the above ask? We really want to know what stops you from updating the source project with the target database's partition scheme and function changes. Thanks.
We have a SQL agent job that runs once a day and checks all of our partitioned tables and creates new partitions where it is needed.
So it's not feasible to add a partition to the database project once a day for tables that are partitioned daily. That would be a full-time job for someone.
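A daily job like the one described typically does something along these lines (a minimal sketch with hypothetical object and filegroup names; the actual agent job may differ):

```sql
-- Hypothetical daily maintenance step: add tomorrow's partition
-- to a date-based partition function/scheme.
DECLARE @NextBoundary DATE = DATEADD(DAY, 1, CAST(GETDATE() AS DATE));

-- Tell the scheme which filegroup the new partition should use ...
ALTER PARTITION SCHEME ps_DailyPartitions NEXT USED [PRIMARY];

-- ... then split the rightmost range to create the new partition.
ALTER PARTITION FUNCTION pf_DailyPartitions() SPLIT RANGE (@NextBoundary);
```

Because this runs on the target every day, the partition function in the database project falls behind within 24 hours of any deployment.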
So while I agree that those two data compression syntaxes are equivalent, the per-partition split in source control is the entire issue causing the discrepancy; if it were stored as ALL, it would be fine.
If I had 1,000 tables partitioned with a daily partition function, would you recommend I update source control with the new partitions for each table every day?
I've also struggled with this problem and have never found a good solution.
It would be nice to have a set of publish profile flags granular enough to solve this use case, rather than just ignoring index properties across the board.
@ssreerama: This issue will impact anyone who implements rolling-window partitioning in an automated fashion, and it pretty much blocks them from fully automating deployments. Would it be possible to add a flag to help us work around this and ignore these schema differences?
Hi @ssreerama, do we have an update on this issue? As @robnsilver says, this is a large issue for anybody who uses rolling-window partitions with SSDT.
Steps to Reproduce: Full steps and example solution can be found in my repo https://github.com/DotdigitalDBA/sqlPackageTickets/tree/main/PartitionCompressionIssue
Basic repro steps are below, but make sure you check the repo above for detailed steps.
Did this occur in prior versions? If not - which version(s) did it work in?
(DacFx/SqlPackage/SSMS/Azure Data Studio)