pointfreeco / swift-snapshot-testing

📸 Delightful Swift snapshot testing.
https://www.pointfree.co/episodes/ep41-a-tour-of-snapshot-testing
MIT License

Snapshots on Apple Silicon devices #424

Closed luisrecuenco closed 2 years ago

luisrecuenco commented 3 years ago

It seems that Apple Silicon machines generate snapshots that are slightly different from the ones generated on x86 machines. The problem is very similar to the one where the very same simulator generates different images depending on the OS version.

The issue persists even when running the simulator under Rosetta.

Given that it'll soon be quite common for a team to have a mix of Intel and Apple Silicon machines, what's the best way to handle this?

Thanks

ldstreet commented 3 years ago

One possible (bad) solution could be to allow for snapshot variants based on architecture. Unfortunately, this would mean recording multiple times, which would almost certainly become a nightmare.

gobetti commented 3 years ago

This and a few other open issues seem to be duplicates of https://github.com/pointfreeco/swift-snapshot-testing/issues/313, but knowing of more scenarios leading to the same error is great for prioritizing. I've also recently upgraded to an M1 and can confirm that, like in the original issue, this doesn't happen on iOS 12. On iOS 13+ the difference image has pixels that are off, usually by 1, which isn't visible in the image itself (it looks plain black), but post-filtering with a simple Threshold tool reveals those pixels.

Edit: our team has a mix of Intel and Apple Silicon, so the "near future" is right now 😅

jjatie commented 3 years ago

I have implemented a custom assertSnapshot in the referenced PR above for anyone needing to work in a mixed Silicon/Intel team. You can find it here on line 22.

While I don't think it's a good solution, I don't know what a better alternative would be.

voidless commented 3 years ago

In my case, snapshots began rendering identically under Apple Silicon after I added a 'Host Application' for the unit test targets. We currently use the iOS 14.2 simulator for unit tests; there are rendering differences with 14.4.

I had to use a host app because Xcode 12.4 running under Rosetta can't launch unit tests without one. We run Xcode under Rosetta because we currently have 9 dependencies that don't support Apple Silicon.

Before that I considered adding precision: 0.99 to the asserts, but that's not a good solution either. In my case the differences were in rounded-corner clipping.

luisrecuenco commented 3 years ago

Hey @voidless. Interesting to know that a "Host Application" fixes the issue. Unfortunately, a host application can't always be used, so I'm afraid that's not an option for a lot of us.

voidless commented 3 years ago

Can you give an example? We use host apps generated by CocoaPods; they only contain an almost-empty appdelegate.m, a main.storyboard, and an info.plist, and are linked with the corresponding framework.

luisrecuenco commented 3 years ago

I guess that could work, but I wonder if there's an easier way that doesn't require adding all that noisy, cumbersome infrastructure (even if CocoaPods automates part of it). Also, AFAIK, you cannot have a host app for Swift packages...

nuno-vieira commented 3 years ago

We have this issue only when rendering shadows: it showed up on a view that has a drop shadow. After removing the drop shadow from the view here, the output is identical on M1 and Intel. I still couldn't really understand why, nor how to fix it yet 😞

UPDATE: We found out that the problem occurs when using a shadowOpacity other than 1. After switching from 0.85 to 1.0, the tests work on both M1 and Intel. For now, since we are testing a reference app and not a production app, we changed the colour of the shadow to fake the opacity, and it is working fine.
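For illustration, here's a minimal Swift sketch of that workaround, assuming a plain CALayer drop shadow (applyCardShadow is a made-up helper name; 0.85 is the value from the comment above):

import UIKit

// Sketch of the workaround described above; applyCardShadow is a hypothetical helper.
func applyCardShadow(to layer: CALayer) {
    // layer.shadowOpacity = 0.85  // opacity < 1 renders slightly differently on M1 vs. Intel
    layer.shadowOpacity = 1.0      // keep the opacity at 1...
    layer.shadowColor = UIColor.black.withAlphaComponent(0.85).cgColor // ...and fake it via the shadow color
    layer.shadowRadius = 4
    layer.shadowOffset = CGSize(width: 0, height: 2)
}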

tovkal commented 3 years ago

We also have a team with a mix of Intel and M1, and our CI is currently Intel but is bound to become M1 someday. We were already using a Host Application; we tried different iOS versions, and tried with and without Rosetta for both Xcode.app and Simulator.app, without any luck. Having different snapshots for each architecture would not work for us, as people on an M1 can't generate snapshots for Intel and vice versa, and that would be bound to fail in CI.

We opted to temporarily (until we are all on arm64) lower the precision, so small changes won't fail, by creating a new Snapshotting strategy called .impreciseImage. Here's a gist in case somebody wants to try it.
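The gist itself isn't reproduced here, but a minimal sketch of such a strategy (assuming it simply forwards to the built-in .image strategy with a lowered precision; 0.98 is an arbitrary value) might look like this:

import SnapshotTesting
import UIKit

extension Snapshotting where Value == UIViewController, Format == UIImage {
    /// An "imprecise" variant of `.image` that tolerates tiny cross-architecture rendering differences.
    static var impreciseImage: Snapshotting {
        .image(precision: 0.98)
    }
}

// Usage: assertSnapshot(matching: someViewController, as: .impreciseImage)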

ishayan18 commented 3 years ago

We also have the same problem when using shadowOpacity. The tests are failing; we also lowered the precision but it didn't help.

JWStaiert commented 3 years ago

I submitted a pull request adding a fuzzy compare implemented in Objective-C, which is 30 times faster than the vanilla matcher under worst-case conditions (all pixels compared). This matcher reports failure if any pixel component's absolute difference exceeds a user-provided maximum, OR if the average pixel-component absolute difference exceeds a user-provided maximum. Only CGImage, NSImage, and UIImage are supported, but I will be adding fuzzy compare for other matchers soon.
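The PR itself is in Objective-C and operates on CGImage/NSImage/UIImage; purely to illustrate the rule described above, a rough Swift sketch over raw pixel bytes might look like this (names and parameters are made up):

import Foundation

// Illustration of the fuzzy-compare rule described above (not the actual PR code).
// Fails if any single component differs by more than maxComponentDelta,
// or if the average component difference exceeds maxAverageDelta.
func fuzzyMatch(
    reference: [UInt8],   // raw RGBA bytes of the reference snapshot
    candidate: [UInt8],   // raw RGBA bytes of the newly rendered snapshot
    maxComponentDelta: Int,
    maxAverageDelta: Double
) -> Bool {
    guard reference.count == candidate.count, !reference.isEmpty else { return false }
    var totalDelta = 0.0
    for i in reference.indices {
        let delta = abs(Int(reference[i]) - Int(candidate[i]))
        if delta > maxComponentDelta { return false }
        totalDelta += Double(delta)
    }
    return totalDelta / Double(reference.count) <= maxAverageDelta
}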

grigorye commented 3 years ago

In my case snapshots began rendering identically under Apple Silicon after I added 'Host Application' for unit test targets.

I'm not 100% sure, but in our case we get differences in rendering some custom icon even when using host apps.

grigorye commented 3 years ago

I wonder whether any solution that employs imprecise matching would result in loads of "modified" snapshots being generated every time they're re-recorded on a "mismatching" platform: as far as I understand, there's currently no way to avoid recording a new variant of a snapshot when the new version still fuzzy-matches the original.

To re-record automatically, we delete the snapshots and run the tests, and in that case there's no chance that fuzzy matching would prevent recording of new versions. OTOH, when "re-recording" manually, I can pass record: true to assertSnapshot (or set the global record to true), but as far as I understand, the current implementation of assertSnapshot does no matching in that case either. If it did the matching, and only recorded the snapshot when it does not match (according to the given matcher), that would probably solve the problem above. (It would also require changing the approach for triggering re-recording as part of automation.) A draft PR is here.

Overall, how do you all handle recording of imprecisely matching snapshots in general?

mcaylus commented 2 years ago

I submitted a pull request adding a fuzzy compare implemented in Objective-C, which is 30 times faster than the vanilla matcher under worst-case conditions (all pixels compared). [...]

Any update on this? Do you have a branch you can refer to for this work?

Namedix commented 2 years ago

Is there a chance that this can be addressed somehow? @stephencelis

codeman9 commented 2 years ago

I've found that forcing the snapshots to be taken in sRGB seems to work. I do this in the device descriptions:

  public static func iPhoneSe(_ orientation: ViewImageConfig.Orientation)
    -> UITraitCollection {
      let base: [UITraitCollection] = [
        .init(displayGamut: .SRGB),
        .init(forceTouchCapability: .available),
        .init(layoutDirection: .leftToRight),
        .init(preferredContentSizeCategory: .medium),
        .init(userInterfaceIdiom: .phone)
      ]
...

  public static func iPhone8Plus(_ orientation: ViewImageConfig.Orientation)
    -> UITraitCollection {
      let base: [UITraitCollection] = [
        .init(displayGamut: .SRGB),
        .init(forceTouchCapability: .available),
        .init(layoutDirection: .leftToRight),
        .init(preferredContentSizeCategory: .medium),
        .init(userInterfaceIdiom: .phone)
      ]
...

Has anyone else tried this? May not be ideal for the final fix, but might be a clue.

tovkal commented 2 years ago

I've found that forcing the snapshots to be taken in sRGB seems to work. I do this in the device descriptions: [...]

Has anyone else tried this? May not be ideal for the final fix, but might be a clue.

I tried setting just the displayGamut trait, recorded all snapshots on an M1 and they failed on an Intel 🙁

ArielDemarco commented 2 years ago

We tried lowering the precision, but some of the snapshots still failed (even with 0.9 😓). We also tried using UITraitCollection(displayGamut: .SRGB), but it didn't work at all. So, in our case, to make our lives easier while developing on both Intel and Apple Silicon, we decided to remove all shadows in our snapshot tests. As we didn't want to add this check to the codebase we ship to production (through environment variables or preprocessor flags), we used swizzling only in the test target:

extension CALayer {
    /// Swaps the shadow-related getters so every layer reports "no shadow" during snapshot tests.
    static func swizzleShadow() {
        swizzle(original: #selector(getter: shadowOpacity), modified: #selector(_swizzled_shadowOpacity))
        swizzle(original: #selector(getter: shadowRadius), modified: #selector(_swizzled_shadowRadius))
        swizzle(original: #selector(getter: shadowColor), modified: #selector(_swizzled_shadowColor))
        swizzle(original: #selector(getter: shadowOffset), modified: #selector(_swizzled_shadowOffset))
        swizzle(original: #selector(getter: shadowPath), modified: #selector(_swizzled_shadowPath))
    }

    private static func swizzle(original: Selector, modified: Selector) {
        let originalMethod = class_getInstanceMethod(self, original)!
        let swizzledMethod = class_getInstanceMethod(self, modified)!
        method_exchangeImplementations(originalMethod, swizzledMethod)
    }

    @objc func _swizzled_shadowOpacity() -> Float { .zero }
    @objc func _swizzled_shadowRadius() -> CGFloat { .zero }
    @objc func _swizzled_shadowColor() -> CGColor? { nil }
    @objc func _swizzled_shadowOffset() -> CGSize { .zero }
    @objc func _swizzled_shadowPath() -> CGPath? { nil }
}

Important: as we work in a framework, we had to create a class that acts as the NSPrincipalClass of the test bundle so the swizzling is called exactly once. Something like this:

final class MyPrincipalClass: NSObject {
    override init() {
        CALayer.swizzleShadow()
    }
}

Note that after declaring the principal class, in order for it to work, you must add it to the Info.plist of the test bundle:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    ...
    <key>NSPrincipalClass</key>
    <string>NameOfTheTestBundle.MyPrincipalClass</string>
</dict>
</plist>

Iron-Ham commented 2 years ago

I've found a fix for this. It's not ideal, but it seemingly works for the moment: run both Xcode and the Simulator in Rosetta. To run your simulator in Rosetta, right-click on Xcode and choose "Show Package Contents", then navigate to Contents > Developer > Applications, where you'll find the Simulator app. Right-click on it, choose "Get Info", and you'll find an option to run it using Rosetta.

This has been sufficient for our >1,000 snapshot tests at GitHub, across app targets and frameworks/modules.

codeman9 commented 2 years ago

I've found the fix for this. It's not ideal, but it seemingly works for the moment: Run both Xcode and the Simulator in Rosetta. To run your simulator in Rosetta, right click on Xcode and choose "Show Package Contents", from there you navigate to "Contents > Developer > Applications," There you'll find the Simulator app. If you right click on it and choose "Get Info", you'll find an option to run it using Rosetta.

This has been sufficient for our >1,000 snapshot tests at GitHub, across app targets and frameworks/modules.

I tried this and some snapshots still fail for me. :(

westerlund commented 2 years ago

I've found the fix for this. It's not ideal, but it seemingly works for the moment: Run both Xcode and the Simulator in Rosetta. To run your simulator in Rosetta, right click on Xcode and choose "Show Package Contents", from there you navigate to "Contents > Developer > Applications," There you'll find the Simulator app. If you right click on it and choose "Get Info", you'll find an option to run it using Rosetta. This has been sufficient for our >1,000 snapshot tests at GitHub, across app targets and frameworks/modules.

I tried this and some snapshots still fail for me. :(

Yep, no luck for us either. It would be nice to get some acknowledgement of this issue from the repo owners.

ldstreet commented 2 years ago

My company created a script that downloads any failed snapshots from a CI build to your local working copy and places them in the correct directories. This kind of script is useful for many reasons, but in this scenario you could have a workflow where you:

1) Write your code and record snapshots locally with Intel.
2) Run CI remotely with Apple Silicon.
3) Run the script to download all failed snapshots.
4) Manually commit/push all valid updates.

This way you only have one set of snapshots, with Apple Silicon as the source of truth. As more developers move their machines to arm64 they can record locally, but I've found that even in that scenario it's still useful to have the option to download from CI.

grigorye commented 2 years ago

@ArielDemarco

We tried to use less precision but some of the snapshots failed (even with 0.9 😓 )

I can only recommend taking a look at this fuzzy comparator by @JWStaiert, which properly accounts for the image nature of the snapshots: https://github.com/JWStaiert/SnapshotTestingEx (see https://github.com/pointfreeco/swift-snapshot-testing/issues/424#issuecomment-866390494 for the details). As far as I can recall, by default the precision accounts for the number of differing "pixels", not for the difference in the "pixel values". The fuzzy comparator changes that for the better.

We're successfully using it together with the patch (see https://github.com/pointfreeco/swift-snapshot-testing/issues/424#issuecomment-921778987) that provides a way to avoid overwriting snapshots when they fuzzy-match.

alrocha commented 2 years ago

I'm going to describe my current situation and what I'm facing. No idea if someone has already mentioned some of this, so sorry for any repetition; I've been reading a lot lately on how to fix this.

Current scenario: in the project I work on, CI runs on Intel GitHub Actions machines. The iOS team is switching to M1 and I was the first one to get it (being the last one would have saved me many headaches 😄). All existing snapshots were recorded on Intel machines, ~10% of them fail on the M1, and new tests recorded on the M1 fail in CI.

What I tried:

It looks like the failing tests are mainly the ones with shadows (shadowOpacity).

I tried modifying the UITraitCollection extension with .sRGB; it didn't work in my case.

Hope it helps someone, cheers!

lukeredpath commented 2 years ago

We're also having this problem as we start transitioning developer devices to Apple Silicon, while some of us are still going to be on Intel for a while longer. There are really two issues at play here:

Any solution that involves standardising where these things happen is difficult to achieve: some developers are on Intel by choice, some are on M1. Our CI currently only supports Intel and we have no ETA on Apple Silicon support. That leaves us needing to be able to verify snapshots accurately on Intel machines.

Of course, if snapshots always need to be verifiable on Intel, we cannot generate snapshots on Apple Silicon machines, which makes life very difficult for developers on M1s: they cannot generate new or updated snapshots unless we find some kind of automated way (like a GitHub Action) to re-generate new and updated snapshots on an Intel executor and re-commit them. This is an option we're considering, but I'd prefer to avoid it.

I think the best solution to this problem is a revised image-diffing strategy that can account for minor differences between the two architectures. We have tried adjusting the precision with mixed results: sometimes it makes a test pass, sometimes it does not. Additionally, changing the precision has a serious impact on performance: I've seen a fairly big test with 10 snapshots take 10x longer when changing the precision from 1.0 to 0.99, taking over 40 seconds!

I'd like to try the SnapshotTestingEx solution above, but unfortunately it looks like it needs a patched version of SnapshotTesting. It would be great to get some movement on this. I'm curious whether @stephencelis or @mbrandonw have run into this issue in their own apps and whether they've found a solution.

Deco354 commented 2 years ago

We've tried changing the test name so that you record different images based on your device's architecture.

This code can be placed in a wrapper around the snapshot assertion so it's always applied, or applied based on an isArchitectureDependent flag just for the problematic tests. It's a workaround, but it appears to do the job for now.

var testID = identifier
if Architecture.isArm64 || Architecture.isRosettaEmulated {
    testID.append("-arm64")
}

assertSnapshot(matching: view, as: .image, testName: testID)

import Foundation

@objc(PSTArchitecture) public class Architecture: NSObject {
    /// Check if process runs under Rosetta 2.
    /// https://developer.apple.com/forums/thread/652667?answerId=618217022&page=1#622923022
    /// https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment
    @objc public class var isRosettaEmulated: Bool {
        // Issue is specific to Simulator, not real devices
        #if targetEnvironment(simulator)
        return processIsTranslated() == EMULATED_EXECUTION
        #else
        return false
        #endif
    }

    /// Returns true on M1 Macs and false on Intel machines or on M1 Macs using Rosetta
    public static var isArm64: Bool {
        #if arch(arm64)
        return true
        #else
        return false
        #endif
    }
}

fileprivate let NATIVE_EXECUTION = Int32(0)
fileprivate let EMULATED_EXECUTION = Int32(1)
fileprivate let UNKNOWN_EXECUTION = -Int32(1)

private func processIsTranslated() -> Int32 {
    let key = "sysctl.proc_translated"
    var ret = Int32(0)
    var size: Int = 0
    sysctlbyname(key, nil, &size, nil, 0)
    let result = sysctlbyname(key, &ret, &size, nil, 0)
    if result == -1 {
        if errno == ENOENT {
            return NATIVE_EXECUTION
        }
        return UNKNOWN_EXECUTION
    }
    return ret
}

olejnjak commented 2 years ago

For me the solution was to exclude the arm64 architecture for the simulator by using the EXCLUDED_ARCHS[sdk=iphonesimulator*] = arm64 build setting; it effectively forces the app to run under Rosetta while Xcode and the simulator still run natively. I had some issues with SPM dependencies, but I was able to migrate them to Carthage.

MariusDeReus commented 2 years ago

Same problem here for tests in a Mac application. Not only do NSViews render differently, even Core Graphics rendering of NSImages differs on M1 compared to Intel. I had this issue before between an Intel developer machine (with a P3 screen) and a headless Mac mini in CI, but that was to a lesser degree, and with a tolerance of 0.99 everything succeeded. Between M1 and Intel some tests still fail even with 0.92, and that is too much tolerance for a meaningful test. Because this is a Mac app, I cannot play with simulator settings or anything like that.

ernichechelski commented 2 years ago

Maybe this solution will be useful. It gives pretty promising and consistent output on Intel and M1 alike. Created with @karolpiateknet.

import SnapshotTesting
@testable import YourProject
import SwiftUI
import XCTest

enum ContentSizeMode {
    /// Content is checked by its intrinsic content width.
    case horizontal
    /// Content is checked by its intrinsic content height.
    case vertical
    /// Content is checked by its intrinsic content size.
    case both
    /// Content is aligned to the screen dimensions.
    case none
}

/// Asserts that a given value matches a reference on disk.
/// Uses intrinsicContentSize height to test proper height of the element.
///
/// - Parameters:
///   - view: a view to be snapshot tested.
///   - name: An optional description of the snapshot.
///   - recording: Whether or not to record a new reference.
///   - timeout: The amount of time a snapshot must be generated in.
///   - contentSizeMode: You can choose which intrinsicContentSize axis should be taken into account. Default is vertical
///     as most components are aligned horizontally to the screen but have intrinsic height.
///   - precision: Default is 98 percent so that color differences between M1 and Intel don't fail snapshot tests.
///     As images are compared by their raw data, the precision describes the percentage of matched data (the docs say pixels, but IMHO that's an oversimplified description).
///   - file: The file in which failure occurred. Defaults to the file name of the test case in which this function was called.
///   - testName: The name of the test in which failure occurred. Defaults to the function name of the test case in which this function was called.
///   - line: The line number on which failure occurred. Defaults to the line number on which this function was called.
func assertSnapshot<Content: View>(
    view: Content,
    named name: String? = nil,
    record recording: Bool = false,
    timeout: TimeInterval = 5,
    contentSizeMode: ContentSizeMode = .vertical,
    precision: Float = 0.98,
    file: StaticString = #file,
    testName: String = #function,
    line: UInt = #line
) {
    let deviceWidth = ViewImageConfig.iPhone8.size?.width ?? 0
    let deviceHeight = ViewImageConfig.iPhone8.size?.height ?? 0

    let finalView: AnyView
    switch contentSizeMode {
    case .horizontal:
        finalView = AnyView(view.frame(height: deviceHeight))
    case .vertical:
        finalView = AnyView(view.frame(width: deviceWidth))
    case .both:
        finalView = AnyView(view)
    case .none:
        finalView = AnyView(view.frame(width: deviceWidth, height: deviceHeight))
    }

    let hostingController = UIHostingController(
        rootView: finalView.ignoresSafeArea()
    )

    let intrinsicContentWidth = hostingController.view.intrinsicContentSize.width
    let intrinsicContentHeight = hostingController.view.intrinsicContentSize.height
    var snapshotSize: CGSize
    switch contentSizeMode {
    case .horizontal:
        snapshotSize = CGSize(
            width: intrinsicContentWidth,
            height: deviceHeight
        )
    case .vertical:
        snapshotSize = CGSize(
            width: deviceWidth,
            height: intrinsicContentHeight
        )
    case .both:
        snapshotSize = CGSize(
            width: intrinsicContentWidth,
            height: intrinsicContentHeight
        )
    case .none:
        snapshotSize = CGSize(
            width: deviceWidth,
            height: deviceHeight
        )
    }
    let failure = verifySnapshot(
        matching: hostingController,
        as: .image(
            precision: precision,
            size: snapshotSize == .zero ? nil : snapshotSize // pass nil when no size could be determined
        ),
        named: name,
        record: recording,
        timeout: timeout,
        file: file,
        testName: testName,
        line: line
    )
    guard let message = failure else { return }
    XCTFail(message, file: file, line: line)
}

Some views still require wrapping in padding(1) to get a correct snapshot on both architectures 😄
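For illustration, a hypothetical call site for the helper above (MyBadgeView is a placeholder view):

// Hypothetical usage of the helper defined above; MyBadgeView is a placeholder.
assertSnapshot(
    view: MyBadgeView().padding(1), // padding(1) works around edge-clipping differences
    contentSizeMode: .vertical
)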

pimms commented 2 years ago

We worked around this issue a few weeks ago by allowing sub-pixels to deviate by a certain amount, as we were struggling with shadows, corner radii, and color-space differences. Instead of using precision: 0.9 (requiring 90% of the pixels to match 100%), we now use precision: 1, pixelDiffThreshold: 5 (requiring 100% of the subpixels to deviate no more than 5 values from the reference).

I meant to open a PR back then, but it seems like I forgot about it. I'll do it if there's any interest.

https://github.com/pimms/swift-snapshot-testing
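For illustration, a call using the fork might look like this (the pixelDiffThreshold parameter name is taken from the comment above; viewController is a placeholder and the fork's actual API may differ):

// Hypothetical usage of the fork described above; exact API may differ.
assertSnapshot(
    matching: viewController,
    as: .image(precision: 1, pixelDiffThreshold: 5) // every subpixel within 5 of the reference
)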

lukeredpath commented 2 years ago

@pimms what is the performance of your solution like? We've noted that lowering the precision below 1 incurs a 10x performance penalty due to the pixel diffing - does yours have a similar penalty?

pimms commented 2 years ago

@lukeredpath No — we had the same issue, but ended up compiling SnapshotTesting (and only SnapshotTesting) with -O.

Update: @lukeredpath opened PR for the performance issues: #571

Longer explanation: We noticed that at one point too (after the M1 transition, I believe?), and found out that it was caused by how range iterations work when compiled without optimization. If I recall correctly, the unoptimized x86 code was a couple of orders of magnitude slower than ARM, causing our CI builds to take forever. This loop is the offender: https://github.com/pointfreeco/swift-snapshot-testing/blob/main/Sources/SnapshotTesting/Snapshotting/UIImage.swift#L105

// Gets chewed into horribly slow code
for byte in 0..

lukeredpath commented 2 years ago

Thank you, would love it if you could open a PR for your pixel diffing strategy too.

pimms commented 2 years ago

@lukeredpath I combined the subpixel thresholding into the PR 👍

choulepoka commented 2 years ago

We have basically used @pimms' fork with great success. It is still a wonder to me why it hasn't been merged yet, because it gets the job done without a performance penalty.

acecilia commented 2 years ago

One alternative solution, maybe useful for somebody: the best way to work around this issue for me was to avoid supporting both M1 and Intel, and instead perform the snapshot testing in CI using exclusively M1 machines.

choulepoka commented 2 years ago

@acecilia That would indeed solve the issue. However, it is not feasible for the time being, because our CI/CD runs on GitHub, which is Intel-based for the foreseeable future.

We have switched to a self-hosted runner for performance reasons, but AWS is not giving the general public access to M1-based Mac minis yet, so we're still stuck with an Intel-based machine for now.

acecilia commented 2 years ago

@choulepoka I see. Yes, in my case the machines are self-hosted, so I could just replace the Intel machines with M1 ones without having to wait for third-party support.

westerlund commented 2 years ago

@choulepoka @acecilia self-hosted runners are not supported on M1 hardware yet. https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners#architectures

tahirmt commented 2 years ago

It's easy to get out of Rosetta and run the builds natively by just doing arch -arm64 ... when invoking xcodebuild.

gistya commented 2 years ago

Any idea why this is happening? I.e. why does the same simulator produce different snapshots depending on the host hardware?

Is it due to differences in the actively selected color space? I.e. if all Macs have their display set to the Generic RGB Profile, does that help?

If not, doesn't this represent a bug in the iOS simulator that Apple should fix? I can't understand why the simulation would differ in this detail based on the host hardware.

keith commented 2 years ago

It sounds like all bets are off when rendering across architectures as they can theoretically go through entirely different rendering pipelines.

carsten-wenderdel commented 2 years ago

@choulepoka @acecilia self-hosted runners are not supported on M1 hardware yet. https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners#architectures

There now is a prerelease of the runner software with Apple Silicon support: https://github.com/actions/runner/releases/tag/v2.292.0

Motobard commented 2 years ago

I submitted a pull request adding a fuzzy compare implemented in Objective-C, which is 30 times faster than the vanilla matcher under worst-case conditions (all pixels compared). [...]

I was unable to find it. Can you provide us with a pointer?

markst commented 2 years ago

I was unable to find it. Can you provide us with a pointer?

was this it? - https://github.com/pointfreeco/swift-snapshot-testing/pull/481

JWStaiert commented 2 years ago

https://github.com/pointfreeco/swift-snapshot-testing/pull/490

#490 makes some changes to the API, possibly not optimally, to allow external extensions to SnapshotTesting, and updates the docs to point to my SnapshotTestingEx package, where the matching routines are implemented.

jlcvp commented 2 years ago

Testing with an empty screen with a navigation bar and a bottom bar with some icons, the image diffs between our M1 Macs and our Intel ones are on the SF Symbols buttons we use in the bottom bar, about 2-3 pixels on each one. Setting the precision to 0.99 makes it work as intended for now.

I'll be following this discussion to see if we can figure out a better solution

ejensen commented 2 years ago

https://github.com/pointfreeco/swift-snapshot-testing/pull/628 should resolve this issue with a new perceptual-difference calculation. The small differences in Intel/Apple Silicon hardware-accelerated anti-aliasing/shadow/blur rendering are under a 2% DeltaE value, which is nearly imperceptible to the human eye. Using a perceptualPrecision value of >= 98% will prevent imperceptible differences from failing assertions while noticeable differences are still caught.
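For example, assuming the PR exposes perceptualPrecision alongside precision on the .image strategy (viewController is a placeholder):

// Sketch of usage with the perceptual-precision comparison from the PR above.
assertSnapshot(
    matching: viewController,
    as: .image(precision: 1, perceptualPrecision: 0.98) // pixels must match within ~2% DeltaE
)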

simondelphia commented 1 year ago

@ejensen

Note that even with perceptual precision, if you re-record snapshots on an Apple Silicon device that were originally recorded on an Intel device, you will often get updated snapshots (with imperceptible differences) in your git diff, which is not ideal because it's then unclear whether the differences are due to actual changes or just the different chips.

Is there any fix for that?

ldstreet commented 1 year ago

@simondelphia isn't this true of any test that uses precision? I'd think failing tests should be your guide on whether or not to re-record, not git diffs. No?