Closed by luisrecuenco 2 years ago
When you have `isRecording` enabled, any difference in the resulting snapshot shows up as a git diff. I generally re-record all snapshots in my project because a change may affect snapshots I did not anticipate. I suppose I could run tests with `isRecording` disabled and then only re-record the failing ones, but that's a lot of extra work.
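For reference, the re-record toggle discussed here can be flipped globally or per assertion. A minimal sketch, assuming the pointfree SnapshotTesting API (the view and test names are illustrative):

```swift
import XCTest
import UIKit
import SnapshotTesting

final class FeatureTests: XCTestCase {
  func testLabel() {
    let view = UILabel()
    view.text = "Hello"

    // Passing record: true forces this single assertion to re-record its
    // reference image. Alternatively, setting the module-level flag
    // `SnapshotTesting.isRecording = true` re-records every assertion,
    // which is the "re-record all snapshots" workflow described above.
    assertSnapshot(matching: view, as: .image, record: true)
  }
}
```

After one recording pass, removing `record: true` (or resetting the global flag) switches the suite back to verification mode.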
Currently, whenever someone makes a PR with snapshots and they're on Intel, we re-record them on Apple Silicon to ensure a consistent standard (most of our contributors are on Apple Silicon), and it's simply cleaner to enforce one standard because of this issue. Even though CI uses Intel machines, it isn't a problem there because of the precision parameters, and `isRecording` is always false there.
Also, sometimes we get snapshot tests that pass even though they shouldn't, which I suppose is because the perceptual precision is below 1 (necessary for CI), so ultimately we have to re-record frequently.
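The precision knobs mentioned above are passed to the snapshotting strategy. A hedged sketch, again assuming the pointfree SnapshotTesting API; the exact values are illustrative and are the trade-off being discussed (too low and real regressions slip through, too high and cross-architecture noise fails the build):

```swift
import XCTest
import UIKit
import SnapshotTesting

final class PrecisionTests: XCTestCase {
  func testCard() {
    let view = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 100))
    view.backgroundColor = .white

    // precision: the fraction of pixels that must match byte-for-byte.
    // perceptualPrecision: how close each pixel must be perceptually,
    // which is what absorbs the small Intel vs. Apple Silicon
    // rendering differences at the cost of masking tiny regressions.
    assertSnapshot(
      matching: view,
      as: .image(precision: 0.99, perceptualPrecision: 0.98)
    )
  }
}
```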
We tried to use a lower precision, but some of the snapshots still failed (even with `0.9` 😓). We also tried using `UITraitCollection(displayGamut: .SRGB)`, but it didn't work at all.
but it didn't work at all. So, in our case, in order to make easier our lives while we develop both using Intel and Apple Silicon we decided to remove all the shadows on our snapshot tests. As we didn't want to add this validation to the codebase that we deliver to production (through environment variables or preprocessor flags), we've used Swizzling only on the test target:extension CALayer { static func swizzleShadow() { swizzle(original: #selector(getter: shadowOpacity), modified: #selector(_swizzled_shadowOpacity)) swizzle(original: #selector(getter: shadowRadius), modified: #selector(_swizzled_shadowRadius)) swizzle(original: #selector(getter: shadowColor), modified: #selector(_swizzled_shadowColor)) swizzle(original: #selector(getter: shadowOffset), modified: #selector(_swizzled_shadowOffset)) swizzle(original: #selector(getter: shadowPath), modified: #selector(_swizzled_shadowPath)) } private static func swizzle(original: Selector, modified: Selector) { let originalMethod = class_getInstanceMethod(self, original)! let swizzledMethod = class_getInstanceMethod(self, modified)! method_exchangeImplementations(originalMethod, swizzledMethod) } @objc func _swizzled_shadowOpacity() -> Float { .zero } @objc func _swizzled_shadowRadius() -> CGFloat { .zero } @objc func _swizzled_shadowColor() -> CGColor? { nil } @objc func _swizzled_shadowOffset() -> CGSize { .zero } @objc func _swizzled_shadowPath() -> CGPath? { nil } }
Important: as we work in a framework, we had to create a class that acts as the `NSPrincipalClass` of the test bundle so the swizzle is called exactly once. Something like this:

```swift
final class MyPrincipalClass: NSObject {
  override init() {
    CALayer.swizzleShadow()
  }
}
```
Note that after declaring the principal class, you need to add it to the `Info.plist` of the test bundle for it to be picked up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  ...
  ...
  <key>NSPrincipalClass</key>
  <string>NameOfTheTestBundle.MyPrincipalClass</string>
</dict>
</plist>
```
Did you see a noticeable drop-off in performance (test suite taking significantly longer) after applying this workaround Ari?
> Did you see a noticeable drop-off in performance (test suite taking significantly longer) after applying this workaround Ari?
No, not at all when we measured it! To be honest, I think in some cases it might even improve performance, because there's almost no overhead from rendering the shadows. In any case, I don't think any change in performance would be noticeable.
Edit: tested in the simulator, not on real devices or a device farm.
It seems that Apple Silicon machines generate snapshots that are slightly different from the ones generated on x86 machines. The problem is very similar to the one where the very same simulator generates different images depending on the OS version.
The issue persists when running the simulator under Rosetta.
Taking into account that it will be quite common in the near future for a team to have developers on both Intel and Apple Silicon machines… what's the best way to handle this?
Thanks