beeware / briefcase

Tools to support converting a Python project into a standalone native application.
https://briefcase.readthedocs.io/
BSD 3-Clause "New" or "Revised" License
2.5k stars 354 forks

Implement persistent build tracking for more intuitive behavior #1714

Open rmartin16 opened 3 months ago

rmartin16 commented 3 months ago

Changes

Related PRs

Relevant issues

PR Checklist:

rmartin16 commented 3 months ago

This is definitely an early POC of this idea to solicit feedback on the general approach.

Please let me know your high-level thoughts as well as workflows this should support :)

freakboy3742 commented 3 months ago

This is definitely an early POC of this idea to solicit feedback on the general approach.

tl;dr - I like what I see here.

  • Strategy

    • The basic idea is checking whether a piece of metadata changed and taking appropriate action

    • So, if the modified datetime for any source file changed, then source should be updated

... or, if there's an entirely new file, but existing files haven't changed. I think this will be caught by the code you've got here - but I'm not sure whether the edge case where a file with an old modification date is moved into a directory will be picked up on all platforms. The real safe option would be a hash of all source files that would be included.

    • Or, if the requirements changed, the requirements should be re-installed

"Change" here gets a bit hairy if the requirement is a reference to a file on disk. Again, we sort of need the hash of referenced sources.

  • This approach lends itself pretty well to the "automatically do a thing Briefcase already supports"....but this will be more difficult for less obvious changes

    • For instance, when an app's version changes, the solution right now is to re-run briefcase create...but as laid out in Update templated content after initial call to create #472, it may be better to only update the files from the template
    • I'm not really sure if this trouble is worth it, tbh.....over just asking the user if they want the app automatically recreated

If the first pass is no more than "You've updated X, you may need to recreate" warning message, it would be a major improvement to developer ergonomics.

We could then address specific updates to templated content individually - e.g., we could introduce something that updates just AndroidManifest.xml on a permission change (because XML parsing is a solved problem) without addressing the update of the Gradle file on a library dependency change (because that requires a read-write parser for groovy format that I doubt exists for Python).

  • Database

    • The "database" is a simple JSON file of key/value pairs

    • I would have used TOML....but we don't have a writer available.

Yes we do - tomli-w is an existing dependency.

  • Each output format has its own database in the build/<app name>/<platform>/<format> directory

    • It could also live in a single file....but it seems more convenient to be able to delete the database for a single format

Agreed, especially since it's tied to a specific rollout of a project. If I rm -rf build, I don't want to have to also purge a database file.
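The per-format tracking database described above could be as simple as the sketch below. This is a hypothetical illustration, not Briefcase's actual API; the filename, helper names, and keys are all assumptions.

```python
import json
from pathlib import Path

# Hypothetical sketch of the tracking database: a flat JSON file of
# key/value pairs stored alongside the build output for one format
# (e.g. build/<app name>/<platform>/<format>/tracking.json).

def load_tracking(db_path: Path) -> dict:
    """Return the tracking data, or an empty dict if no database exists yet."""
    try:
        return json.loads(db_path.read_text())
    except FileNotFoundError:
        return {}

def update_tracking(db_path: Path, **entries) -> None:
    """Merge new key/value pairs into the tracking database."""
    data = load_tracking(db_path)
    data.update(entries)
    db_path.parent.mkdir(parents=True, exist_ok=True)
    db_path.write_text(json.dumps(data, indent=2))
```

Because the file lives inside the format's build directory, `rm -rf build` deletes the tracking state along with the build it describes, exactly as discussed above.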

  • Implementation

    • Commands

    • Unless I'm entirely overlooking a simpler approach, I think this will basically require augmenting the beginning of relevant commands to check the metadata and ensure the appropriate actions are or will be taken

I can't think of any other obvious place; if the POC here is any indication, I don't think it's especially onerous. Adding a couple of extra calls to top-level commands isn't an overhead or complexity that concerns me.

  • I've quickly implemented this for the `build` command for updated source and/or requirements

Please let me know your high-level thoughts as well as workflows this should support :)

As I said at the beginning - this looks like a solid start, broadly in line with what I would have expected to develop if I'd written this myself.

A couple of additional edge cases and usages that I noticed:

  1. Changes in Python version. It's not a common occurrence, but if you're switching between virtual environments, it's easy to end up in a situation where you've installed a support package for 3.X and then try to run with 3.Y.
  2. Differences in sensitivity between metadata keys. For example, adding or changing a long_description won't impact an iOS project because the template doesn't use that value, so it doesn't require a rebuild. That said, third party templates could use the key... so maybe we just need to be over-vigilant here.
  3. Difference in the response to different metadata keys. Longer term the response to a change in permissions is different to the response to a change in library dependencies; plus, there are an increasing number of platform specific keys that might have platform-specific "update" possibilities.
  4. Dev mode requirements. We currently run dev -r on first run, as judged by whether the .dist-info file for the app exists. It would be nice to include changes in requirements as part of that trigger; but this does impact on where the metadata is stored, and what it's keyed on (since dev mode dependencies are dependent on the virtual environment that is active)
rmartin16 commented 3 months ago

The real safe option would be a hash of all source files that would be included.

"Change" here gets a bit hairy if the requirement is a reference to a file on disk. Again, we sort of need the hash of referenced sources.

I agree...but I'm a little nervous about apps or requirements with large files. Reading in hundreds or thousands of megabytes and hashing them could be quite slow on some systems...and at least a noticeable pause on fast systems. That suggests an option to disable it will be necessary. Alternatively, we could split the difference and maybe hash a listing of the files and their timestamps.

Yes we do - tomli-w is an existing dependency.

Whoops! Thanks for reminding me.

freakboy3742 commented 3 months ago

The real safe option would be a hash of all source files that would be included.

"Change" here gets a bit hairy if the requirement is a reference to a file on disk. Again, we sort of need the hash of referenced sources.

I agree...but I'm a little nervous about apps or requirements with large files. Reading in hundreds or thousands of megabytes and hashing them could be quite slow on some systems...and at least a noticeable pause on fast systems. That suggests an option to disable it will be necessary.

Absolutely agreed. If this is a check that needs to run on every run, it needs to be able to complete very quickly, even on projects with a large number of files, or projects with individually large files.

Alternatively, we could split the difference and maybe hash a listing of the files and their timestamps.

Yeah - if adding a file doesn't alter the modification date of a directory, a diff/hash on the list of files should be sufficient.

mhsmith commented 3 months ago

Git has to solve a similar problem, and I believe the way it works is to only re-check a file's hash if its timestamp or size has changed.
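The optimization mhsmith describes could be sketched roughly as follows. This is an illustrative stand-in, not Git's actual implementation; the cache structure is an assumption.

```python
import hashlib
import os

# Sketch of the Git-style check: only re-hash a file's content when its
# size or modification time differs from the cached values, so unchanged
# files cost one stat() call instead of a full read.

def file_digest(path: str) -> str:
    """Hash the file's content in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def cached_digest(path: str, cache: dict) -> str:
    """Return the file's hash, re-reading content only if stat data changed."""
    st = os.stat(path)
    key = (st.st_size, st.st_mtime_ns)
    entry = cache.get(path)
    if entry and entry[0] == key:
        return entry[1]  # stat unchanged; trust the cached hash
    digest = file_digest(path)
    cache[path] = (key, digest)
    return digest
```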

rmartin16 commented 3 months ago

4. Dev mode requirements. We currently run dev -r on first run, as judged by whether the .dist-info file for the app exists. It would be nice to include changes in requirements as part of that trigger; but this does impact on where the metadata is stored, and what it's keyed on (since dev mode dependencies are dependent on the virtual environment that is active)

If I allow the Command to control where the build tracking database lives, then I can have the DevCommand store the database in the dist-info directory while the other Commands use the output format build directory. Using the dist-info directory may be taking too much liberty...in which case, the root of the project is probably the only other candidate.

Git has to solve a similar problem, and I believe the way it works is to only re-check a file's hash if its timestamp or size has changed.

Great point; thanks.

freakboy3742 commented 3 months ago
  4. Dev mode requirements. We currently run dev -r on first run, as judged by whether the .dist-info file for the app exists. It would be nice to include changes in requirements as part of that trigger; but this does impact on where the metadata is stored, and what it's keyed on (since dev mode dependencies are dependent on the virtual environment that is active)

If I allow the Command to control where the build tracking database lives, then I can have the DevCommand store the database in the dist-info directory while the other Commands use the output format build directory. Using the dist-info directory may be taking too much liberty...in which case, the root of the project is probably the only other candidate.

I'd lean towards a hidden folder in the project root (or maybe somewhere in the venv's share/data folder?), rather than trying to cram this into .dist-info. If we ever need to add more than a single tracking file (which seems possible, if we wanted to add alternate dev modes for handling web or mobile), it would be nice to have somewhere to put that content.

rmartin16 commented 3 months ago

I'd lean towards a hidden folder in the project root (or maybe somewhere in the venv's share/data folder?),

Ok; I'll finalize the location along with the format of the file as I keep working on this.

Additionally, I reworked how the Commands call each other today to better support this more intuitive behavior.

Previously, the build, run, and package Commands made their own assessments about whether to call other Commands. Now, they always call the prerequisite commands and let them decide if they need to run.

Therefore, BuildCommand always calls UpdateCommand; in this way, BuildCommand can force UpdateCommand to update parts of the app based on command-line arguments but UpdateCommand can also assess the situation and make any necessary updates. If updates are made, this is passed back to BuildCommand via state so it can re-build the app.

Similarly, RunCommand always calls BuildCommand and it decides if the app needs to be built because it hasn't ever been built or defers to UpdateCommand for whether updates are necessary (and builds the app if updates are made). PackageCommand behaves in a similar way.

Furthermore, I've also made --no-update behave more like how it's documented; if a user specifies --no-update, then it overrides all this behavior and skips updating.
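The delegation described above could look roughly like the sketch below. All class and method names are hypothetical placeholders to illustrate the pattern, not Briefcase's real command classes.

```python
# Illustrative sketch: build/run/package always call their prerequisite
# command; the prerequisite decides whether any work is needed, and reports
# what it did back to the caller via shared state.

class UpdateCommand:
    def __call__(self, app, state):
        # The caller can force an update via state; otherwise, the update
        # command makes its own assessment against the tracking database.
        if state.get("update_requested") or self.is_stale(app):
            self.update(app)
            state["app_updated"] = True  # tell the caller a rebuild is needed
        return state

    def is_stale(self, app):
        return False  # placeholder: compare tracking DB against sources

    def update(self, app):
        pass  # placeholder: refresh sources/requirements in the bundle


class BuildCommand:
    def __init__(self):
        self.update_command = UpdateCommand()

    def __call__(self, app, state=None):
        # Always delegate to the prerequisite; it decides if it must run.
        state = self.update_command(app, state or {})
        if state.get("app_updated") or not self.binary_exists(app):
            self.build(app)
            state["app_built"] = True
        return state

    def binary_exists(self, app):
        return False  # placeholder: check for a previously built binary

    def build(self, app):
        pass  # placeholder: invoke the build tooling
```

A RunCommand or PackageCommand would wrap BuildCommand in the same way, so each command only has to reason about its immediate prerequisite.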

freakboy3742 commented 3 months ago

Previously, the build, run, and package Commands made their own assessments about whether to call other Commands. Now, they always call the prerequisite commands and let them decide if they need to run.

That definitely looks like an elegant cleanup.

The only note I've got is whether the state tracking would benefit from being more thorough - tracking all the individual components of is_app_updated, in addition to the overall flag - i.e., record if the template was rolled out, and a support package was installed, and resources were installed. I don't know that I have a specific use case for this, but if the data is readily available, is there any reason to not track it?

(Also, in true bike shed territory - the specific naming "is_app_updated" seems unwieldy to me. The "is" prefix doesn't add anything, and what would be updating other than an app?)

Furthermore, I've also made --no-update behave more like how it's documented; if a user specifies --no-update, then it overrides all this behavior and skips updating.

This is definitely a weird edge case. The intention was to prevent any automated update - that is, updates that happen just because you're running the command. It's only an option on run, and the only automated update occurred when in test mode, on the basis that tests should always run on the most recent code.

The question that your change introduces: What does briefcase run -u --no-update or briefcase run -r --no-update mean?

Previously, it would update - because you've explicitly requested an update. briefcase run --test -u --no-update would have been ambiguous, but would have updated because the update request was explicit.

The new behaviour will prevent the update in both cases, but won't raise any error.

At the very least, there's a need for a check to warn the user that they've asked for something contradictory. However, I wonder if maybe the better fix here is to drop the "implicit update" as part of test, and just expect that "normal" testing is briefcase run --test -u. That would remove the need for --no-update entirely.

rmartin16 commented 3 months ago

What does briefcase run -u --no-update or briefcase run -r --no-update mean?

At the very least, there's a need for a check to warn the user that they've asked for something contradictory.

I'll need to assess all this more fully....but a different perspective may not consider this as confusing. In a lot of CLI apps I've used, they allow for negating previously specified arguments; so, one could consider --no-update to mean "regardless of what I (or a default command I'm trying to override) specified, do not update anything about the build." I'm not necessarily advocating for this atm....but it is at least reasonable, I think.

freakboy3742 commented 3 months ago

I'll need to assess all this more fully....but a different perspective may not consider this as confusing.

For sure - interpretation matters a lot here. Making the situation impossible is the best option; but if it's unavoidable, there's no good answer - just an edge case you can document and/or warn about.

If we're keeping both, I guess I might lean toward "no" being the interpretation on the basis that it's better to be non-destructive when there's ambiguity - but that's a very weakly held opinion.

rmartin16 commented 3 months ago

Furthermore, I've also made --no-update behave more like how it's documented; if a user specifies --no-update, then it overrides all this behavior and skips updating.

This is definitely a weird edge case. The intention was to prevent any automated update - that is, updates that happen just because you're running the command. It's only an option on run, and the only automated update occurred when in test mode, on the basis that tests should always run on the most recent code.

After reviewing this and the history more closely, I understand better what you're saying: that is, when --test was introduced, along with causing the test suite to run, it would also have Briefcase automatically update the app source in the bundle....and if you wanted to avoid that update, you could use --no-update. Without the --test option, --no-update doesn't apply; this is definitely clear in the RTD docs.

To that end, I would argue I am trying to change this paradigm of operation for Briefcase.

Currently, after Briefcase initially rolls out the template and completes a build of the bundle, it will not update the bundle for subsequent commands without explicit instruction from the user via one of the command-line switches (unless, of course, the bundle is deleted).

With the changes proposed here, Briefcase will instead (strive to) always keep the bundle up-to-date with the sources of truth and will require explicit instruction from the user to run a command with a stale bundle.

In many ways, I think this is the most intuitive behavior for Briefcase to assert by default. That is, when users make updates to their project, Briefcase should automatically incorporate those changes and only ignore them with explicit instruction. I think most users are expecting Briefcase to run their current app when they enter briefcase run....not a version of it from the past.

Towards this end, instead of getting rid of the --no-update switch, one might argue the other direction and instead get rid of the --update-* switches because Briefcase will automatically apply them as necessary.

Admittedly, I didn't really set out with this thought process per se...but I feel like this inversion of behavior makes sense in the long term. I am interested to hear your thoughts given these explicit user instructions certainly appear intentional...so, I may be overlooking larger principles recommending their use even in the presence of mechanisms to automatically apply them.

The question that your change introduces: What does briefcase run -u --no-update or briefcase run -r --no-update mean?

Previously, it would update - because you've explicitly requested an update.

FWIW, using main, both of these commands error with Cannot specify both --update[-requirements] and --no-update

briefcase run --test -u --no-update would have been ambiguous, but would have updated because the update request was explicit.

Similarly, using main, this command errors with Cannot specify both --update and --no-update

The new behaviour will prevent the update in both cases, but won't raise any error.

At the very least, there's a need for a check to warn the user that they've asked for something contradictory.

I think --no-update should win out in this case but I wouldn't protest adding a warning message.

freakboy3742 commented 3 months ago

With the changes proposed here, Briefcase will instead (strive to) always keep the bundle up-to-date with the sources of truth and will require explicit instruction from the user to run a command with a stale bundle.

In many ways, I think this is the most intuitive behavior for Briefcase to assert by default.

I agree; the historical behavior has been mostly driven by the fact that we didn't have an ability to track changes that needed a template-level update.

Towards this end, instead of getting rid of the --no-update switch, one might argue the other direction and instead get rid of the --update-* switches because Briefcase will automatically apply them as necessary.

I agree, with my only qualifying comment being that the "no update needed" check needs to be quick. We could run update-requirements and update-resources on every run today - but the no-op requirements check still takes a couple of seconds. The metadata-based approach you're working on here should achieve that, AFAICT.

It also means that the update command itself is almost unneeded - except for the Xcode/Visual Studio (and, I guess, Gradle in Android Studio) case where you're using those environments to run/debug the project.

The question that your change introduces: What does briefcase run -u --no-update or briefcase run -r --no-update mean? Previously, it would update - because you've explicitly requested an update.

FWIW, using main, both of these commands error with Cannot specify both --update[-requirements] and --no-update

I forgot that we added that check. It's somewhat of a moot point if explicit --update flags are being deprecated; but a hard error is better than a warning.

rmartin16 commented 3 months ago

I was working through the "did any files change in this directory" algorithm today.

Ultimately, if we want to know if any file experienced some sort of change, we'll need to track each file and its metadata in the tracking database. (Although, I suppose if we wanted to avoid writing all that information to the tracking database, we could create a hash of the metadata itself for all of the files.)

As for the specific pieces of metadata, I'm not actually sure it will be valuable to create a hash of each file?

To allow this algorithm to stand a chance to run quickly, we can't verify the hash of the file content hasn't changed each time. So, we're left comparing the pieces of metadata that are quick to retrieve...and if any of those have changed, we can assume that file has been updated. From there, calculating a hash only when other metadata has changed doesn't seem to have much value to me....unless we're going to ignore those metadata changes if the hash still matches....but I'm not sure that's appropriate.
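The metadata-only check described above could be sketched like this; the choice of stat fields is an assumption, and in practice the snapshot would be persisted to the tracking database between runs.

```python
import os

# Sketch: record cheap stat fields for every file under a directory, and
# treat any difference between snapshots as "something changed" - without
# ever reading file content, so the check stays fast.

def stat_snapshot(root: str) -> dict:
    """Map each file path under root to its (size, mtime_ns) pair."""
    snapshot = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            snapshot[os.path.relpath(path, root)] = (st.st_size, st.st_mtime_ns)
    return snapshot

def has_changed(old: dict, new: dict) -> bool:
    """Any added, removed, or stat-modified file counts as a change."""
    return old != new
```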

So, what I'm thinking at this point is:

I think this accomplishes our goal of detecting changes. For something like Git, I think it needs to go one step further to hash the file because it would need to know whether to create a new object in its database to hold the file's current content or not.

So, please let me know if you see holes in this algorithm.

I think the immediate one that comes to mind is "what if the file changes but the metadata doesn't?" This kinda feels like "what if I create a hash collision?" Well, it's possible...but that's really unlikely to happen in normal workflows...unless that's specifically what you're trying to do. At any rate, the only way to detect this situation would be to calculate a hash...but that brings us full circle to running a hash in the critical path of this algorithm....and we can't...

freakboy3742 commented 3 months ago

So, please let me know if you see holes in this algorithm.

That seems fine to me. It will definitely catch the obvious cases, and if there's any common patterns to the non-obvious cases, they should show up soon enough.

I think the immediate one that comes to mind is "what if the file changes but the metadata doesn't?"

Yeah - that's definitely an edge case I think we can live with.

The only other thought I've had is to make this someone else's problem: tools like watchdog implement a lot of this functionality, with the benefit that someone else is maintaining it and keeping on top of all the weird edge-cases that exist with filesystems etc.

rmartin16 commented 3 months ago

So, please let me know if you see holes in this algorithm.

That seems fine to me. It will definitely catch the obvious cases, and if there's any common patterns to the non-obvious cases, they should show up soon enough.

I think the immediate one that comes to mind is "what if the file changes but the metadata doesn't?"

Yeah - that's definitely an edge case I think we can live with.

I guess not so surprisingly...understanding if something changed inside a directory is becoming full of corner cases.

If we just consider the app sources, this should be mostly straightforward since these directories should just contain Python modules and assets for the app.

However, the situation becomes much more fraught for local app requirements. For whatever reason, building the sdist updates the modified datetime for the top-level directory of the requirement. So, ok...we can exclude considering the top-level directory, I guess, since what we really care about is the contents anyway. But what about the state of Git for the requirement? If the user runs git fetch where nothing meaningful is changed, should the requirement be reinstalled? To that end, we can ignore the .git directory and anything in it....probably same for __pycache__ directories.

But this kinda feels like the tip of an iceberg... That said, I think the use of python -m build to create the sdist helps limit the scope of this behavior....or at least I'm hoping it does. My understanding is that python -m build will create the sdist in isolation; therefore, I'm hoping that means building the requirement won't do things like create a build directory inside the requirement when the sdist is built. If that's true, I think that limits any extravagant exclusion rules this logic would need.
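The exclusion rules discussed above (ignoring `.git`, `__pycache__`, and the like) could be handled during the directory walk itself. The specific exclusion list here is illustrative, not what Briefcase actually uses.

```python
import os

# Sketch: skip bookkeeping directories when walking a local requirement,
# so churn like `git fetch` or bytecode caching doesn't register as a
# meaningful source change. The set of excluded names is an assumption.
EXCLUDED_DIRS = {".git", "__pycache__"}

def tracked_files(root: str) -> list:
    """All file paths under root, excluding bookkeeping directories."""
    paths = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from descending into them.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDED_DIRS]
        paths.extend(os.path.join(dirpath, f) for f in filenames)
    return sorted(paths)
```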

The only other thought I've had is to make this someone else's problem: tools like watchdog implement a lot of this functionality, with the benefit that someone else is maintaining it and keeping on top of all the weird edge-cases that exist with filesystems etc.

So, I ended up writing a quick diff utility for our purposes before I properly looked at watchdog...but I wish I'd realized watchdog had "directory snapshot" support before I did all that. Its ability to snapshot a directory is basically a more battle-hardened version of what I wrote. They also provide a way to directly compare two snapshots for equality.

However...it isn't all roses; the "snapshot" that's returned is a pseudo-dictionary of filepaths mapped to their metadata. The metadata, though, is an os.stat_result object and will require special serialization into TOML. Even if that's trivial, though, watchdog doesn't support a mechanism to create a "snapshot" from anything except a filesystem path.

Therefore, using watchdog, I see two options:

I've implemented the hash method for now. Open to ideas/thoughts.

[EDIT] However, when briefcase dev installs local requirements, it just uses pip install /path/to/req....so, this can definitely create/update all sorts of files in the requirement directory...

[EDIT EDIT] I did realize part of my issue is that I'm only evaluating whether the directories changed at the beginning of the Briefcase command. If, instead, I evaluate the directories at the beginning and the end of the command, it can basically ignore changes to the directories that are a result of Briefcase's actions.

freakboy3742 commented 3 months ago

I guess not so surprisingly...understanding if something changed inside a directory is becoming full of corner cases.

If we just consider the app sources, this should be mostly straightforward since these directories should just contain Python modules and assets for the app.

However, the situation becomes much more fraught for local app requirements.

I'm comfortable being a little over-eager on this. As long as the case of "I didn't touch a thing" returns as "no change", I'm OK with empty git updates or __pycache__ updates returning as "a change".

However...it isn't all roses; the "snapshot" that's returned is a pseudo-dictionary of filepaths mapped to their metadata. The metadata, though, is an os.stat_result object and will require special serialization into TOML. Even if that's trivial, though, watchdog doesn't support a mechanism to create a "snapshot" from anything except a filesystem path.

Therefore, using watchdog, I see two options:

  • Store the directory snapshots as pickles so they can persist between runs of Briefcase

    • This is relatively straightforward but creates a lot of overhead for managing these pickle files
    • The need for file exclusion rules will also complicate this because watchdog's built-in support to compare two snapshots doesn't have any filtering support

Also - it involves pickles, which... Aside from the security implications of objects that preserve executable state, we then have to deal with potential pickle version incompatibilities. Suffice to say I'll go to great lengths to avoid using pickles.

  • Create a hash of the directory snapshot to store in the tracking database

    • This is much more straightforward; basically just capture a hash of all the os.stat_result objects
    • However, we obviously lose any fidelity of the snapshot and couldn't do something like determine the files that changed between the two snapshots....but this information doesn't seem especially useful.
    • Creating the hash is also a little tricky since you need to ensure metadata for each file is incorporated into the hash the same way each time it's calculated.

I've implemented the hash method for now. Open to ideas/thoughts.

The hash approach definitely sounds sufficient to me. I can't think of any reason we need to do a deep diff - we just need to know if a change has occurred at all.

The only edge case I can think of is whether it's sensitive to file ordering at all - i.e., If the OS returns the same file list but in a different order, does that evaluate as a filesystem change? I'm not sure if that's a problem in practice (or even how you'd evaluate if it happens...), but it's worth poking around to confirm.

[EDIT] However, when briefcase dev installs local requirements, it just uses pip install /path/to/req....so, this can definitely create/update all sorts of files in the requirement directory...

It might be worth switching local file references to a 2-pass "build wheel/install wheel" approach. The web backend currently does this (because wheels are required for distribution), but building a wheel cache, using a build directory that isn't in the package's source folder, should avoid the "filesystem change" issue.

rmartin16 commented 3 months ago

The only edge case I can think of is whether it's sensitive to file ordering at all - i.e., If the OS returns the same file list but in a different order, does that evaluate as a filesystem change? I'm not sure if that's a problem in practice (or even how you'd evaluate if it happens...), but it's worth poking around to confirm.

When I first implemented this, I couldn't understand why it always created a new hash for the directory...even when watchdog did not find changes with its "directory snapshot diff" support. It was, in fact, the ordering of the paths used to create the hash; the metadata for each path must be incorporated into the hash in the same order each time the hash is created. After that, the hash was reliably consistent when a directory hadn't changed.
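The fix described here amounts to feeding the metadata into the hash in a stable, sorted-path order. A minimal sketch of the idea (the exact fields hashed are an assumption):

```python
import hashlib
import os

# Sketch: fingerprint a directory by hashing each file's metadata in
# sorted-path order, so an unchanged directory always produces the same
# digest regardless of the order the filesystem lists entries in.

def directory_fingerprint(root: str) -> str:
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            entries.append(
                (os.path.relpath(path, root), st.st_size, st.st_mtime_ns)
            )
    h = hashlib.sha256()
    for rel, size, mtime_ns in sorted(entries):  # stable order is the key step
        h.update(f"{rel}\0{size}\0{mtime_ns}\0".encode())
    return h.hexdigest()
```

This also answers the file-ordering question raised earlier: because the entries are sorted before hashing, the OS's listing order can't affect the result.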

rmartin16 commented 3 months ago

As a preliminary speed test, I created a sources directory with 100,000 files in it. On a beefy system using an SSD, it calculated the hash for that directory in about 0.9 seconds. On my Raspberry Pi 4 running off an SD card, it took about 3 seconds.

freakboy3742 commented 3 months ago

As a preliminary speed test, I created a sources directory with 100,000 files in it. On a beefy system using an SSD, it calculated the hash for that directory in about 0.9 seconds. On my Raspberry Pi 4 running off an SD card, it took about 3 seconds.

0.9s for 100k files sounds acceptable to me - most projects should be a lot less than that. What's the timing on a "just the Toga template" project?

Just thinking about ways a speed test might be misleading - is that 100k files in a single directory, or split across lots of directories? Directory traversal could impact on speed...

rmartin16 commented 3 months ago

What's the timing on a "just the Toga template" project?

My main machine reads it in a few hundredths of a second or so.

Just thinking about ways a speed test might be misleading - is that 100k files in a single directory, or split across lots of directories? Directory traversal could impact on speed...

Just in a single directory. I'll definitely need to consider some more varied scenarios. My hope is that since we're just reading metadata, access to it is comparatively well optimized by the file system and OS over accessing file content itself....at least for modern systems making repeated calls for the same metadata.

freakboy3742 commented 2 months ago

See #1733 for a related edge case that might need to be detected: has the venv itself changed? The datestamp on the python executable might be a reasonable proxy for this.

rmartin16 commented 2 months ago

See #1733 for a related edge case that might need to be detected: has the venv itself changed? The datestamp on the python executable might be a reasonable proxy for this.

Yeah; I've been thinking about how to detect this as well. Initially, the modified datetime of the python exe wasn't useful...until I told os.stat() not to follow the symlinks....since otherwise, they could very well be the same file even if the virtual environment was new/different.

Along with this, I've implemented the heavy hitters for this so far:

Complications

Open to high-level feedback on what's been completed so far if you're interested. Still plenty of debugging code and refinements necessary, though.

Next step is figuring out a system for detecting arbitrary metadata changes while allowing the commands to have different sensitivities to those changes.

freakboy3742 commented 2 months ago

Along with this, I've implemented the heavy hitters for this so far:

  • Briefcase version changes

How sensitive is this to dev commit-level "version change" updates?

  • App resources change
    • AFAICT, the app resources directories must be included in sources; so, if sources change, then the app resources are re-installed as well

So - this might be a terminology problem. Anything in sources will be copied in, but that's covered as part of a normal update. Resources currently only refers to icons and splash screens. The locations of those files are predictable, but not literally expanded. You'd need to check both (a) the value of the icon setting, and (b) the datestamp on any file that is implicitly referenced by the icon setting.

  • Briefcase deferring to the build system
    • For what's been implemented so far, this is most noticeable for requirements for Android, Flatpak, and Web since Briefcase just creates a requirements.txt

Would there be any impact to always deferring? It's clearly needed for the Android et al cases, but what is stopping us from deferring the tracking database for the "default" case? Is there a case where those builds need "current" build data?

rmartin16 commented 2 months ago

Along with this, I've implemented the heavy hitters for this so far:

  • Briefcase version changes

How sensitive is this to dev commit-level "version change" updates?

None at all; only the base part of the Briefcase version is tracked.

  • App resources change

    • AFAICT, the app resources directories must be included in sources; so, if sources change, then the app resources are re-installed as well

So - this might be a terminology problem. Anything in sources will be copied in, but that's covered as part of a normal update. Resources currently only refers to icons and splash screens. The locations of those files are predictable, but not literally expanded. You'd need to check both (a) the value of the icon setting, and (b) the datestamp on any file that is implicitly referenced by the icon setting.

hmm...ok; but is there a requirement that the resources live inside a directory specified in sources? Or could you specify files in completely arbitrary locations? If arbitrary locations aren't allowed, updating the resources will always trigger an app sources update anyway. Alternatively, collecting all the resource file paths together and calculating a hash of the files' metadata wouldn't be that hard...
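Something along these lines is what I have in mind for the metadata hash, assuming we can enumerate the candidate resource paths up front (sketch only; names are illustrative):

```python
import hashlib
import os


def metadata_digest(paths):
    """Digest the (path, size, mtime) of each referenced resource file.

    A change to any file's size or modification time, or a file going
    missing, changes the digest, so comparing digests between runs
    detects resource updates without reading file contents.
    """
    h = hashlib.sha256()
    for path in sorted(paths):
        try:
            st = os.stat(path)
            h.update(f"{path}:{st.st_size}:{st.st_mtime_ns}".encode())
        except FileNotFoundError:
            h.update(f"{path}:missing".encode())
    return h.hexdigest()
```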

  • Briefcase deferring to the build system

    • For what's been implemented so far, this is most noticeable for requirements for Android, Flatpak, and Web since Briefcase just creates a requirements.txt

Would there be any impact to always deferring? It's clearly needed for the Android et al cases, but what is stopping us from deferring the tracking database for the "default" case? Is there a case where those builds need "current" build data?

If we defer tracking each step for all builds, we lose any fidelity for tracking the independent tasks that succeeded before a build failure. For instance, if Briefcase completes tasks A and B but C fails, deferred tracking wouldn't be able to know this...unless we added an intermediate level of tracking where tasks are recorded as they complete and only the write to the tracking database is deferred...but then why defer at all?

I might have tunnel vision at this point; so, please let me know if you're imagining something else.

freakboy3742 commented 2 months ago

Along with this, I've implemented the heavy hitters for this so far:

  • Briefcase version changes

How sensitive is this to dev commit-level "version change" updates?

None at all; only the base part of the Briefcase version is tracked.

We might want to include the "dev" part (but not the dev number) in the marker that is used here - when Briefcase goes from 0.3.18dev1234 to 0.3.18, we probably want to flag that as a notable update.
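Something like this sketch would capture that distinction, treating all dev builds of a base version as equivalent while still flagging the dev-to-release transition (the function name is hypothetical):

```python
def version_marker(version):
    """Reduce a version string to ``(base, is_dev)``.

    "0.3.18.dev1234" and "0.3.18.dev999" produce the same marker, so
    dev-commit churn is ignored, but either differs from the marker for
    the final "0.3.18" release.
    """
    base, sep, _dev_number = version.partition("dev")
    return (base.rstrip("."), bool(sep))
```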

  • App resources change

    • AFAICT, the app resources directories must be included in sources; so, if sources change, then the app resources are re-installed as well

So - this might be a terminology problem. Anything in sources will be copied in, but that's covered as part of a normal update. Resources currently only refers to icons and splash screens. The locations of those files are predictable, but not literally expanded. You'd need to check both (a) the value of the icon setting, and (b) the datestamp on any file that is implicitly referenced by the icon setting.

hmm...ok; but is there a requirement that the resources live inside a directory specified in sources? Or could you specify files in completely arbitrary locations? If arbitrary locations aren't allowed, updating the resources will always trigger an app sources update anyway. Alternatively, collecting all the resource file paths together and calculating a hash of the files' metadata wouldn't be that hard...

There's no requirement that they be in sources - in fact, the opposite (they shouldn't be in the sources folder) is better behavior, because you don't want the plethora of Android images to be part of your macOS app payload.

The distinction is currently blurred because the default icon setting points at src/<app_name>/resources/<app_name>, but I'm proposing this be removed in beeware/briefcase-template#111.

  • Briefcase deferring to the build system

    • For what's been implemented so far, this is most noticeable for requirements for Android, Flatpak, and Web since Briefcase just creates a requirements.txt

Would there be any impact to always deferring? It's clearly needed for the Android et al cases, but what is stopping us from deferring the tracking database for the "default" case? Is there a case where those builds need "current" build data?

If we defer tracking each step for all builds, we lose any fidelity for tracking the independent tasks that succeeded before a build failure. For instance, if Briefcase completes tasks A and B but C fails, deferred tracking wouldn't be able to know this...unless we added an intermediate level of tracking where tasks are recorded as they complete and only the write to the tracking database is deferred...but then why defer at all?

I might have tunnel vision at this point; so, please let me know if you're imagining something else.

I guess it comes down to the complexity of having 2 different flavours of tracking to support the Android/Flatpak case. In your example with a failed C, it's obviously preferable that only C is done on the next pass, but I don't have an issue with the next build requiring A and B be repeated if the overhead/complexity of managing a more granular state is high, or managing state in a way that is compatible with the Android/Flatpak case requires 2 significantly different implementations.

rmartin16 commented 2 months ago

If we defer tracking each step for all builds, we lose any fidelity for tracking the independent tasks that succeeded before a build failure. For instance, if Briefcase completes tasks A and B but C fails, deferred tracking wouldn't be able to know this...unless we added an intermediate level of tracking where tasks are recorded as they complete and only the write to the tracking database is deferred...but then why defer at all? I might have tunnel vision at this point; so, please let me know if you're imagining something else.

I guess it comes down to the complexity of having 2 different flavours of tracking to support the Android/Flatpak case. In your example with a failed C, it's obviously preferable that only C is done on the next pass, but I don't have an issue with the next build requiring A and B be repeated if the overhead/complexity of managing a more granular state is high, or managing state in a way that is compatible with the Android/Flatpak case requires 2 significantly different implementations.

That makes sense. Although, this really only applies to installing app requirements; for everything else, Briefcase installs the content into the build for the actual build system to use. So, the added complexity to account for this specific difference is rather marginal.

That said, tracking at the "command level" instead of "command task level" could create a cleaner implementation....insofar as there wouldn't be tracking code littered throughout the implementation of the Command. Instead, tracking could just happen at the beginning and end of each Command. This argument might sway me to track only at successful command completion.

Interestingly, though, installing app requirements introduces another complication here: when Briefcase installs requirements, it first deletes anything that may already be installed; so, if the build command fails, we'd be forced to clear the current tracking and force re-installation of requirements the next time regardless, to ensure the requirements are actually installed.
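A rough sketch of how I'd handle that interaction, assuming a dict-like tracking database (all names hypothetical): invalidate the entry before the destructive step, and record it only if the step succeeds:

```python
import contextlib


@contextlib.contextmanager
def tracked_step(tracking, key, marker):
    """Clear a tracking entry before a destructive step; record on success.

    Requirement installation deletes what's already installed before
    reinstalling, so the old tracking entry must be invalidated before
    the step runs; if the step then raises, the next run sees no entry
    and re-installs.
    """
    tracking.pop(key, None)   # assume failure until the step completes
    yield
    tracking[key] = marker    # only reached if no exception was raised
```

Usage would be `with tracked_step(db, "requirements", reqs_marker): install_requirements(...)`, which keeps the tracking writes at the edges of the task rather than littered through the Command implementation.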