cliedeman opened this issue 4 years ago
Great idea 👍 I would suggest using a separate organization for testing (like go-semantic-release-test)
@christophwitzko can you create a subgroup on GitLab? I have already created the GitHub org.
I will 👍 Can you change my role to owner for the go-semantic-release-test org?
Created the test subgroup: https://gitlab.com/go-semantic-release/test
Should I add you as owner? What is your GitLab username?
@christophwitzko my username on GitLab is cliedeman. I made you an owner.
I have generated the first-pass sample repo here: https://github.com/go-semantic-release-test/test
Used it to test the pagination on the gitlab provider and picked up some fun behaviour... https://github.com/go-semantic-release/provider-gitlab/pull/2
I haven't figured out the next step for the e2e tests.
The code used to generate the sample repo is here: https://github.com/go-semantic-release-test/generate
@cliedeman Thanks, added you too :)
I also have not thought about the next steps, but hopefully I will have some time next week.
Got a bit further.
The generate repo is populating the test repo on each master CI run. I am getting a failure pushing to GitLab though (both pushes use a deploy key with write permissions). It works locally but fails on CI. I think it's due to the SSH host key fingerprint not being trusted.
https://github.com/go-semantic-release-test/generate/runs/1330832285
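If the failure really is the untrusted host key, a common fix is to add GitLab's host key to `known_hosts` before the push step. A minimal sketch of such a CI step (the `KNOWN_HOSTS_FILE` override is just for illustration; the real step would run `ssh-keyscan` on the runner):

```shell
#!/bin/sh
# Hypothetical CI step: trust GitLab's SSH host key before pushing.
# Assumes ssh-keyscan is available; on a fresh CI runner known_hosts
# is usually empty, which makes git-over-ssh pushes fail.
KNOWN_HOSTS="${KNOWN_HOSTS_FILE:-$HOME/.ssh/known_hosts}"
mkdir -p "$(dirname "$KNOWN_HOSTS")"
# Append gitlab.com's public host keys; ignore failures (e.g. no network)
# so the sketch stays runnable offline.
ssh-keyscan -t rsa,ed25519 gitlab.com >> "$KNOWN_HOSTS" 2>/dev/null || true
echo "known_hosts updated"
```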
My next thought is how to run the actual integration test. Do I run the semantic-release binary directly or invoke it in process? Secondly, how do we assert the output? My current favourite plan is to create a custom hook plugin that writes out the results to files.
Edit: investigate whether deleting the tag on a repo deletes the corresponding release.
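The file-based assertion idea could look roughly like this: the release run (or a custom hooks plugin) writes the computed version to a file, and the test compares it against the expected value for the scenario. The `.version` file name and the expected value here are illustrative assumptions, not confirmed plugin behaviour:

```shell
#!/bin/sh
# Hypothetical e2e output assertion. The echo below stands in for the
# semantic-release run / hooks plugin that would write the result file.
set -e
echo "2.1.0" > .version          # stand-in for the real release output
expected="2.1.0"
actual="$(cat .version)"
[ "$actual" = "$expected" ] || { echo "version mismatch: $actual != $expected"; exit 1; }
echo "e2e assertion passed: $actual"
```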
I want to add some end to end tests to avoid issues like https://github.com/go-semantic-release/provider-gitlab/issues/1
My plan is to create a sample repo (probably via a bash script that generates a lot of commits, with a branch for each test scenario), place this repo on GitHub, and mirror it to GitLab.
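A minimal sketch of such a generator script, assuming conventional-commit messages and one branch per scenario (all names here are illustrative, not the actual generate repo's contents):

```shell
#!/bin/sh
# Sketch: build a throwaway repo with commits and a scenario branch.
set -e
dir="$(mktemp -d)"
cd "$dir"
git init -q
git config user.email "test@example.com"
git config user.name "test"
# Two conventional commits on the default branch.
git commit -q --allow-empty -m "feat: initial feature"
git commit -q --allow-empty -m "fix: a bug fix"
# One branch per test scenario, e.g. a breaking change.
git branch scenario-breaking-change
git checkout -q scenario-breaking-change
git commit -q --allow-empty -m "feat!: breaking change"
git checkout -q -
echo "commits: $(git rev-list --count HEAD)"
```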
Then we use dry-run tests to verify the output, running them only on master to avoid concurrency issues.
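The master-only gate could be expressed as a simple ref check in the CI script (`GITHUB_REF` is the ref GitHub Actions exposes; the helper function here is just a sketch):

```shell
#!/bin/sh
# Sketch: only run the e2e suite when the build is on master, so
# concurrent branch builds don't race against the shared test repos.
run_e2e() {
  if [ "$1" = "refs/heads/master" ]; then
    echo "running e2e tests"
  else
    echo "skipping e2e tests on $1"
  fi
}

run_e2e "refs/heads/master"      # would run the suite
run_e2e "refs/heads/feature-x"   # would skip it
```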