Closed. atanass closed this 4 months ago.
I've done some testing and it seems like it's working relatively well. Evidence below:
Pipeline structure with 2 message threads (1 for building & 1 for deploying)
Messages in Slack:
CircleCI config snippet:
```yaml
version: 2.1
orbs:
  slack: nukengprod/ecr@dev:slack
executors:
  default:
    docker:
      - image: cimg/base:2023.10
jobs:
  test:
    executor: default
    steps:
      - run: echo testing
  build:
    executor: default
    steps:
      - run: echo building
  deploy:
    executor: default
    steps:
      - run: echo deploying
  slack:
    executor: default
    parameters:
      finished:
        type: boolean
        default: false
      phase:
        type: string
    steps:
      - when:
          condition: << parameters.finished >>
          steps:
            - run:
                name: Prepare slack template
                command: |
                  echo "{\"blocks\": [{\"type\": \"section\",\"text\": {\"type\": \"plain_text\",\"text\": \"Phase << parameters.phase >> has been finished.\",\"emoji\": true}}]}" > template.json
                  echo "export SLACK_PIPELINE_DEPLOY_TEMPLATE=\"\$(cat template.json)\"" >> $BASH_ENV
      - unless:
          condition: << parameters.finished >>
          steps:
            - run:
                name: Prepare slack template
                command: |
                  echo "{\"blocks\": [{\"type\": \"section\",\"text\": {\"type\": \"plain_text\",\"text\": \"Phase << parameters.phase >> has been started.\",\"emoji\": true}}]}" > template.json
                  echo "export SLACK_PIPELINE_DEPLOY_TEMPLATE=\"\$(cat template.json)\"" >> $BASH_ENV
      - slack/notify:
          channel: "#atanas-test"
          template: SLACK_PIPELINE_DEPLOY_TEMPLATE
          event: always
          thread: << parameters.phase >>
          debug: true
workflows:
  version: 2
  test:
    jobs:
      - slack:
          name: pre-build
          phase: build
      - build:
          requires:
            - pre-build
      - slack:
          name: post-build
          phase: build
          finished: true
          requires:
            - build
      - slack:
          name: pre-deploy
          phase: deployment
          requires:
            - build
      - deploy:
          requires:
            - pre-deploy
      - slack:
          name: post-deploy
          phase: deployment
          finished: true
          requires:
            - deploy
```
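The two "Prepare slack template" steps rely on CircleCI's `$BASH_ENV` mechanism: anything appended to the file `$BASH_ENV` points at is sourced before each subsequent step, which is how the rendered JSON reaches `slack/notify`. A minimal sketch of that hand-off, simulated outside CircleCI with a temp file standing in for `$BASH_ENV` (on a real executor CircleCI sets and sources `BASH_ENV` automatically):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Work in a throwaway directory; stand in for CircleCI's $BASH_ENV with a
# temp file (assumption: we are running plain bash, not a CircleCI step).
cd "$(mktemp -d)"
BASH_ENV="$(mktemp)"

# "Step 1": render the template and export it via $BASH_ENV, exactly as the
# "Prepare slack template" step in the config above does.
echo "{\"blocks\": [{\"type\": \"section\",\"text\": {\"type\": \"plain_text\",\"text\": \"Phase build has been finished.\",\"emoji\": true}}]}" > template.json
echo "export SLACK_PIPELINE_DEPLOY_TEMPLATE=\"\$(cat template.json)\"" >> "$BASH_ENV"

# "Step 2": a later step sources $BASH_ENV (CircleCI does this implicitly)
# and can then read the rendered template from the environment.
source "$BASH_ENV"
echo "$SLACK_PIPELINE_DEPLOY_TEMPLATE"
```

Note the escaping: the `\$(cat template.json)` is written literally into `$BASH_ENV`, so the command substitution runs when the file is sourced, not when it is written.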
Test with 3 consecutive messages in a thread. Snippet:
```yaml
version: 2.1
orbs:
  slack: nukengprod/ecr@dev:slack
executors:
  default:
    docker:
      - image: cimg/base:2023.10
jobs:
  test:
    executor: default
    steps:
      - run: echo testing
  build:
    executor: default
    steps:
      - run: echo building
  deploy:
    executor: default
    steps:
      - run: echo deploying
  slack:
    executor: default
    parameters:
      finished:
        type: boolean
        default: false
      phase:
        type: string
    steps:
      - run:
          name: Prepare slack template
          command: |
            echo "{\"blocks\": [{\"type\": \"section\",\"text\": {\"type\": \"plain_text\",\"text\": \"Deployment process: << parameters.phase >>.\",\"emoji\": true}}]}" > template.json
            echo "export SLACK_PIPELINE_DEPLOY_TEMPLATE=\"\$(cat template.json)\"" >> $BASH_ENV
      - slack/notify:
          channel: "#atanas-test"
          template: SLACK_PIPELINE_DEPLOY_TEMPLATE
          event: always
          thread: build
workflows:
  version: 2
  test:
    jobs:
      - slack:
          name: pre-build
          phase: start
      - build:
          requires:
            - pre-build
      - slack:
          name: build-in-progress
          phase: "in progress"
          requires:
            - pre-build
      - slack:
          name: post-build
          phase: finished
          requires:
            - build
```
Pipeline:
Messages:
Thank you for taking the time to look into this @aaronstillwell! I'd like to make it as clear, simple, and robust as possible for users. Looking forward to any further recommendations and reviews.
Hi, apologies for the long delay in response, do you think it'd be possible to use save_cache and restore_cache instead of workspaces? The reason for this is that if people are already using workspaces it could get them in a wonky state, for one, and for bigger workspaces, attaching twice can be a pretty significant operation.
The way caches can be specified by key means it could be much more robust. Thanks for the PR!
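A cache-based version of the thread hand-off might look roughly like this (the cache key, path, and the idea of a thread-info file are illustrative assumptions here, not the orb's actual implementation):

```yaml
steps:
  # Restore any previously stored thread info for this workflow + thread id.
  - restore_cache:
      keys:
        - slack-thread-{{ .Environment.CIRCLE_WORKFLOW_ID }}-build
  - slack/notify:
      channel: "#atanas-test"
      event: always
      thread: build
  # Persist the thread info (e.g. the Slack thread_ts) for later jobs.
  - save_cache:
      key: slack-thread-{{ .Environment.CIRCLE_WORKFLOW_ID }}-build
      paths:
        - /tmp/slack-thread-info
```

Because caches are immutable, a later `save_cache` with the same key is simply skipped, which suits this use: the thread timestamp only needs to be written once by the first `notify` call.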
Very good latter point... unintended consequences could also include unintentionally large credit burn, for example a large workspace being unwittingly restored to a self-hosted runner.
Using caches should be negligible for such a small amount of data in any case.
Some thinking out loud regarding moving this behaviour to a cache:
- ... `$CIRCLE_WORKFLOW_ID` in the path to the thread info files to prevent this?
- ... `notify` 10 times per day, that's 150 cache entries lingering around? Although given the tiny amount of data being stored, never likely to be a huge problem for anyone.

Regardless, it feels pertinent that we have the option of opting in to this functionality via an additional parameter like `persist_thread_information` and document accordingly.

Any additional objections/observations @atanass @otremblay?
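The opt-in could be as small as one extra parameter on the command (`persist_thread_information` is the hypothetical name floated above, not a published parameter):

```yaml
- slack/notify:
    channel: "#atanas-test"
    event: always
    thread: build
    # hypothetical opt-in flag; without it, no thread cache is saved/restored
    persist_thread_information: true
```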
Thanks guys! I replaced the workspace persistence with cache storage. I ran a few corner-case tests to cover complicated user inputs. I've intentionally included the thread_id parameter value in the cache key name because it ensures there is no inconsistency of cache content in case of parallel job invocation.
@atanass one quick question - do you have a save_cache example open you can look at to check the size of the resulting cache in your example? I'm sure it's so tiny as to be a moot point but it would be good to sanity check.
Yes, here's a snapshot:
Ok fair to say the cache usage is... negligible
Your orb has been published to the CircleCI Orb Registry. You can view your published orb on the CircleCI Orb Registry at the following link: https://circleci.com/developer/orbs/orb/circleci/slack?version=4.13.1
Description: Add a new parameter to the `notify` command, `thread`, to enable Slack message replies in a thread.

Motivation: https://github.com/CircleCI-Public/slack-orb/issues/378

Checklist:
- [x] All new jobs, commands, executors, parameters have descriptions.
- [ ] Usage Example version numbers have been updated.
- [ ] Changelog has been updated.
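For reference, a minimal usage sketch of the released parameter, based on the test configs earlier in this thread (the channel name and job names are illustrative): two `notify` calls sharing the same `thread` value, where the second is posted as a reply in the first message's thread.

```yaml
version: 2.1
orbs:
  slack: circleci/slack@4.13.1
jobs:
  announce:
    docker:
      - image: cimg/base:2023.10
    parameters:
      thread:
        type: string
    steps:
      - slack/notify:
          channel: "#deploys"            # illustrative channel
          event: always
          thread: << parameters.thread >>
workflows:
  deploy:
    jobs:
      - announce:
          name: deploy-started
          thread: deployment
      - announce:
          name: deploy-finished
          thread: deployment             # same value -> reply in the same thread
          requires:
            - deploy-started
```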