snoop123 opened this issue 7 years ago
In principle this should be no problem. The way it works is that we create a git repo in a working folder, check out one changeset at a time into that directory, then add the changed files to the git repo and commit them using the time and committer information of the RTC changeset.
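In shell terms the loop is roughly the sketch below. This is an illustration only: the RTC side is elided, and the `$CHANGESET_IDS` / `$CS_*` variable names are made up, not anything the tool actually uses.

```sh
# rough sketch of the per-changeset migration loop
git init .
for cs in $CHANGESET_IDS; do
    # (RTC CLI step goes here: accept/load changeset $cs into this working folder)
    git add -A
    # commit with the changeset's original timestamp and author
    GIT_AUTHOR_DATE="$CS_DATE" GIT_COMMITTER_DATE="$CS_DATE" \
        git commit --author="$CS_AUTHOR <$CS_EMAIL>" -m "$CS_COMMENT"
done
```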
I understand the process, but could there be a scaling issue? I see there are a few parameters in migration.properties regarding heap / cache / window size, but I couldn't find instructions on how to set them. Could you maybe elaborate?
As far as I remember, for our migration we increased packedGitLimit from the default 10m to about 500m, as suggested in a StackOverflow article. That's also the only tuning hint I know about. I hope this helps a bit...
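For reference, that kind of tuning usually ends up in the git config that JGit reads (assuming the tool goes through JGit, which the packedGitLimit name suggests). The values below are illustrative, not what we used:

```sh
# illustrative values only; core.packedGitLimit and core.packedGitWindowSize
# are JGit-specific keys that JGit picks up from the normal git config
git config --global core.packedGitLimit 512m
git config --global core.packedGitWindowSize 512m
```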
Thanks!
Hi, I have a very big project with a large number of files. I also changed packedGitLimit to 500m, but I'm getting `java.lang.OutOfMemoryError: Java heap space`. Any pointers here would be appreciated.
@himmakam Java OOM can be sorted by making sure that JVM (Java Virtual Machine) arguments raising the memory limit are passed to the JVM that's running the RTC CLI. It'll be something like setting `JAVA_OPTS=-Xmx4g`, or editing the "scm" script to include the argument `-Xmx4g`. Adjust "4g" (four gigs) to whatever is necessary - the default is pretty small.

FYI https://stackoverflow.com/questions/14763079/what-are-the-xms-and-xmx-parameters-when-starting-jvms has more info on what the argument means. The rest is just "plumbing" to ensure that the parameter gets passed to Java when the `java` command is eventually run.
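Concretely, it's one of these two (a sketch; whether the launcher honors `JAVA_OPTS` depends on your RTC CLI install, as noted above):

```sh
# either export the variable before running the migration
# (if the scm launcher honors JAVA_OPTS)...
export JAVA_OPTS="-Xmx4g"

# ...or edit the "scm" launcher script so its java invocation
# includes the flag, roughly:
#   java -Xmx4g <original arguments>
```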
I hope it works for you - I suspect I'm going to have to use this tool pretty soon myself...
Does this project support migrating large numbers of changesets, and large changesets? (We have changesets that contain hundreds of files.)
Can I somehow fine-tune this process and improve the running time? Maybe by changing migration.properties parameters?