Closed · MahmoudMabrok closed this 1 week ago
The Scribe team will do our best to address your contribution as soon as we can. The following is a checklist for maintainers to make sure this process goes as well as possible. Feel free to address the points below yourself in further commits if you realize that actions are needed :)
If you're not already a member of our public Matrix community, please consider joining! We'd suggest using Element as your Matrix client, and definitely join the General and Android rooms once you're in. Also consider joining our bi-weekly Saturday dev syncs. It'd be great to have you!
Thanks for this, @MahmoudMabrok! Let me know when we're all ready for a review 😊
It's ready, but the JaCoCo step keeps failing. Once that's fixed, it'll be ready for review.
Looping in @angrezichatterbox for the review here :) Thanks for the work here, @MahmoudMabrok!
One thing that I'm wondering as well: do we want all of this in one job? Would it not be a bit easier for people to respond to the issues if they could compartmentalize the errors they're getting, with linting/detekt in one job and the other types in others? It might be a bit overwhelming if it's all in one, whereas with the split we could say "Please start with the linting errors here and then we'll move to the others".
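To sketch what I mean (this is purely illustrative; the job names, JDK version, and action versions are placeholders, not taken from our current workflow):

```yaml
# Illustrative sketch only: each check gets its own job, so contributors
# see a separate pass/fail per check. Names and versions are placeholders.
name: pr_checks
on: pull_request

jobs:
  ktlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
          cache: gradle
      - run: ./gradlew lintKotlin

  detekt:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
          cache: gradle
      - run: ./gradlew detekt

  # ...and a test job following the same pattern.
```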
Feedback on this is welcome!
Thanks for the PR, @MahmoudMabrok—everything looks great! I do have one suggestion: would it be better to handle caching in a separate job alongside the JDK environment setup, rather than configuring it in both jobs?
It needs to be done in the same job: lint and detekt generate files that need to be cached for faster runs, so if we split the jobs, we'll lose those files.
Or do you mean making it a composite action so we can reuse it inside each job? If that's the idea, we can handle it in a separate PR.
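For reference, the composite action I have in mind would look roughly like this (the file path and JDK version are placeholders, not a final implementation):

```yaml
# .github/actions/setup/action.yml (hypothetical path)
# Composite action bundling JDK setup and Gradle caching, so each job
# can reuse it with a single `uses:` step.
name: Setup JDK and Gradle cache
runs:
  using: composite
  steps:
    - uses: actions/setup-java@v4
      with:
        distribution: temurin
        java-version: '17'   # placeholder version
        cache: gradle
```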
Let me try to explain my point of view; it might make things clearer.
Using a single job lowers the overall time, since we don't repeat the checkout, Java installation, Gradle setup, and other initialization, and some steps are also faster when run in sequence.
For the PR owner: if there's a lint issue, the lint step shows as an error, they fix it and push again, lint runs again, and if it passes, the remaining steps are checked. With separate jobs, the advantage is that the pass/fail status of every stage is visible at once. But say the PR has a lint issue: lint fails while the other jobs succeed, and when the PR owner pushes again, all of the jobs run again, even the ones that already passed.
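To illustrate, the single-job layout runs everything in sequence after one setup; this is just a sketch, with versions as placeholders:

```yaml
# Illustrative single-job layout: one checkout and setup, then the checks
# run in sequence in the same workspace; a failing step skips the rest.
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
          cache: gradle
      - run: ./gradlew lintKotlin
      - run: ./gradlew detekt
      - run: ./gradlew test
```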
@andrewtavis @angrezichatterbox, is it clearer now?
I meant we could have a job that does the JDK install and caching, and let the other jobs depend on it. Could this be possible, @MahmoudMabrok?
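Roughly this shape (just a sketch, not working config):

```yaml
# Illustrative sketch of the proposal: one setup job, with the check jobs
# gated on it via `needs`. Note that each job still starts on a fresh
# runner, so the lint job below would still need its own JDK and workspace.
jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
          cache: gradle

  lint:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew lintKotlin
```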
We could do that, but it would cache only Gradle, which won't add much benefit for us.
But we can make it a composite action and reuse it.
The issue is that jobs don't share data with each other.
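Each job starts on a fresh runner, so passing files between jobs would need explicit artifact steps on every run, roughly like this (artifact name and path are placeholders):

```yaml
# Illustrative: sharing files across jobs means explicit artifact steps,
# which add upload/download overhead on every run.
# In the producing job:
- uses: actions/upload-artifact@v4
  with:
    name: lint-reports      # placeholder artifact name
    path: build/reports/    # placeholder path
# In the consuming job:
- uses: actions/download-artifact@v4
  with:
    name: lint-reports
```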
So, as I understand it, even in this single-job setup, the entire job (or, if split, each individual job) re-runs from the beginning when a failure occurs and the user pushes a new change. Am I correct?
It will re-run, but the generated files are cached, so subsequent builds take less time.
As you can see here, the pipeline checks run in only 1m, whereas the older run took 2m, and so on.
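The caching step is along these lines (the paths and key are placeholders, not the exact workflow contents):

```yaml
# Illustrative caching step: restore Gradle state and generated outputs
# from the previous run so re-runs spend less time rebuilding.
- uses: actions/cache@v4
  with:
    path: |
      ~/.gradle/caches
      ~/.gradle/wrapper
      build   # placeholder for the generated lint/detekt outputs
    key: gradle-${{ runner.os }}-${{ hashFiles('**/*.gradle*', 'gradle/wrapper/gradle-wrapper.properties') }}
    restore-keys: |
      gradle-${{ runner.os }}-
```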
I'm ultimately OK with the split-jobs version of this and would like the jobs split. Thinking of developer experience, especially in a project that needs to be welcoming to early-stage developers, pushing to a repo and getting errors that you fix, only to push and get more errors, isn't something that will build confidence in potential contributors. Yes, the jobs will individually take longer, but the goal is also that they'll be run fewer times, as all changes can be fixed in one go. This also means fewer notifications for maintainers, in the hope that changes will be fixed in fewer commits. It also makes the conversation with a contributor much easier, as we'd be able to direct them to specific errors and confidently say that fixing, say, the ktlint or detekt errors is all that's needed for the PR to be merged.
Appreciate the conversation here, and hope this makes sense!
I got your point, @andrewtavis.
I made the changes, but the pipeline failed due to #243, which is not related to the current changes.
@andrewtavis, could you check the workflow? If any changes are needed, I'll work on them.
They're split into different jobs now, and we also made a composite action so it can be reused.
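Each job now reuses the shared setup, roughly like this (the action path is hypothetical):

```yaml
# Illustrative: each split job calls the shared composite action for setup.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/setup   # hypothetical path to the composite action
      - run: ./gradlew lintKotlin
```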
Contributor checklist

- Ran the `./gradlew lintKotlin detekt test` command as directed in the testing section of the contributing guide

Description

Related issue

#196