bakhirev / assayo

git log analysis and visualization for team lead
https://assayo.online/

Support for multirepo? #56

Closed: mihaigalos closed this issue 1 month ago

mihaigalos commented 1 month ago

Hi,

Awesome stuff. This works well for a monorepo. Any plans to offer support for a multirepo setup? I tried the following, but maybe this is not the right approach here; it doesn't work:


```bash
#!/bin/bash

run_for_one_repo() {
  repo=$1
  subfolder="projects/$repo"

  git remote add "$repo" ../"$repo"
  git fetch "$repo"

  # Fetch LFS objects
  git checkout "$repo/main" -- .gitattributes
  git lfs fetch "$repo" --all

  # Using subtree to add and merge, preserving full history
  git subtree add --prefix="$subfolder" "$repo/main"
}

# Define the list of repositories
repos=(
  'foo-one'
  'bar-two'
)

git init
git lfs install

for repo in "${repos[@]}"; do
  run_for_one_repo "$repo"
done
```

bakhirev commented 1 month ago

You need https://github.com/bakhirev/assayo-crawler (or https://hub.docker.com/r/bakhirev/assayo-crawler). At the moment I haven't translated the docs into English. It is a Node.js server: it walks a list of repositories and collects a single log file (or produces many log files).

bakhirev commented 1 month ago

Please let us know if you have managed to solve the problem using the Crawler service. I'm working on improving it.

mihaigalos commented 1 month ago

Hi @bakhirev,

Google Translate works for websites, too. So no language problem. 😄

I just want to read an input folder containing some subfolders, generate a log which I can then upload to the UI. Seems the crawler wants me to create a JSON of where to download from, etc.

I mean, I already have the folders and I mount them to the input via Docker:

```bash
pushd "$(mktemp -d)"
docker run --rm -it --name assayo-crawler -p 8091:80 \
  --mount type=bind,source=/Users/foo/projectsNG,destination=/usr/src/assayo_crawler/input \
  --mount type=bind,source="$(pwd)",destination=/usr/src/assayo_crawler/output \
  bakhirev/assayo-crawler
```

Can you simplify the crawler to look in the input and create the log without any clicking in the crawler UI?

bakhirev commented 1 month ago

> Seems the crawler wants me to create a JSON of where to download from, etc.

Yes.

> Can you simplify the crawler to look in the input and create the log without any clicking in the crawler UI?

Yes, I can. I've taken this task on, but not for this month. I'm currently editing a Python package, incorporating edits from other people, and redesigning the site. Maybe in November I'll get back to working on the Crawler and do this.

It seems to me that it's worth writing a bash command for this specific case, without the Crawler. I'll think about it this weekend and see if there's any way to do it in a couple of lines.
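
A rough sketch of such a command might look like the following. This is an assumption, not a tested recipe: it presumes assayo can ingest one concatenated `git log` dump, and the input/output paths and the `--pretty` format string are placeholders; take the exact `git log` flags from the assayo README.

```bash
#!/bin/bash
# Sketch: walk the first level of subdirectories under an input folder,
# run `git log` in each git repository, and append everything to a single
# log file that can be uploaded to the assayo UI.
# NOTE: the paths and the --pretty format string are placeholders (assumptions);
# use the exact `git log` command documented in the assayo README.

input_dir="${1:-/Users/foo/projectsNG}"   # folder containing the repos
output_file="${2:-./log.txt}"             # combined log for the assayo UI

: > "$output_file"                        # truncate any previous log

for repo in "$input_dir"/*/; do
  [ -d "$repo/.git" ] || continue         # skip folders that are not git repos
  echo "Processing $repo" >&2
  git -C "$repo" --no-pager log \
    --numstat --reverse --date=iso \
    --pretty=format:"%ad>%aN>%ae>%s" >> "$output_file"
  echo >> "$output_file"                  # blank line between repositories
done
```

Like the script in the next comment, this only looks one level deep into the input folder; deeper nesting would need a recursive walk.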

bakhirev commented 1 month ago

@mihaigalos, made specially for you, my friend :) Download this txt file.

  1. Download mihaigalos.txt to /Users/foo/
  2. Rename the extension from .txt to .js (so it becomes mihaigalos.js)
  3. Run node mihaigalos.js
  4. Your combined log will be in /Users/foo/log.txt

The script searches only one level of subdirectories. If you need it to be recursive, write to me. Did that solve the problem?