Closed GhettoKiwi closed 1 year ago
We agree that this has to be improved for COBOL and PL/I.
Do you do any HLASM development? It would be interesting to compare the numbers for fetching SYS1.MACLIB macros to the numbers of fetching copybook files in COBOL, as we had experimented with a parallel loading strategy for our HLASM language server.
We sadly do not have any HLASM development, no
Is there any progress on this request?
Unfortunately, this can't be a quick fix, but we do want to improve this.
Hi,
For information, below is a description of the solution that we are in the process of setting up, which could meet your needs.
We have distributed our sources in different Git repositories according to their nature:
In our (programs) Git repositories, in a dedicated folder, we have configuration files for each type of Git branch of our Git workflow, which we use to manage our builds. In each of these files we declare the Git repositories, and which Git branch of each, that the current repository's builds require. So we declare the Git repositories that contain the copybooks needed for the builds, but also for editing the source code of the COBOL programs.
For VS Code and therefore Z Open Editor, we have developed a private extension which is responsible for cloning and updating the Git repositories containing the copybooks. The cloning is done in a dedicated folder in the main (programs) Git repository which is declared in .gitignore, so it is not tracked. When VS Code starts, the extension checks whether the secondary copybook Git repositories are present, and clones them if they are not. When the Git branch changes, the extension repeats this presence check, and in addition switches and refreshes the Git branch in these secondary repositories according to the per-branch configuration files in the main repository.
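The clone-and-refresh logic described above can be sketched roughly as follows. This is an illustration, not the actual private extension: the config shape, repository URLs, and folder layout are all hypothetical.

```typescript
// Hypothetical sketch of the secondary-repository sync logic.
// The SecondaryRepo shape and the "deps" folder name are assumptions.
import * as fs from "fs";
import * as path from "path";
import { execSync } from "child_process";

// One entry per secondary repository, as read from the per-branch config file.
interface SecondaryRepo {
  url: string;     // e.g. "git@host:team/copybooks-shared.git" (illustrative)
  branch: string;  // branch to check out for the current main-repo branch
  folder: string;  // clone target inside the git-ignored deps folder
}

// Build the git commands needed to make `repo` present and on the right branch.
export function syncCommands(repo: SecondaryRepo, depsDir: string): string[] {
  const target = path.join(depsDir, repo.folder);
  if (!fs.existsSync(path.join(target, ".git"))) {
    // Not cloned yet: clone the requested branch directly.
    return [`git clone --branch ${repo.branch} ${repo.url} ${target}`];
  }
  // Already cloned: fetch, switch branch, and refresh.
  return [
    `git -C ${target} fetch origin`,
    `git -C ${target} checkout ${repo.branch}`,
    `git -C ${target} pull --ff-only`,
  ];
}

// Run on VS Code activation and on every branch change of the main repository.
export function syncAll(repos: SecondaryRepo[], depsDir: string): void {
  for (const repo of repos) {
    for (const cmd of syncCommands(repo, depsDir)) {
      execSync(cmd, { stdio: "inherit" });
    }
  }
}
```

Splitting command construction from execution keeps the branch logic testable without touching a real remote.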
The same technique could be used to transfer complete PDS contents from the remote z/OS site, with an overwrite mechanism that reproduces the equivalent of a PDS concatenation: first transfer the PDS at the last concatenation level, then transfer the other PDSs in reverse concatenation order, each one overwriting the previous.
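The overwrite trick can be made concrete with a small sketch: copying datasets in reverse concatenation order means the first (highest-priority) PDS lands last and wins. The dataset names here are invented for illustration.

```typescript
// Sketch of reproducing a PDS concatenation with overwriting transfers.
// Dataset names are illustrative only.

// SYSLIB-style search order: the first library that contains a member wins.
export function transferOrder(concat: string[]): string[] {
  // Transfer lowest-priority first; later transfers overwrite earlier ones.
  return [...concat].reverse();
}

// Simulate the resulting local folder: map each member to the PDS
// whose copy of it survives after all overwriting transfers.
export function effectiveMembers(
  contents: Map<string, string[]>,
  concat: string[],
): Map<string, string> {
  const result = new Map<string, string>();
  for (const pds of transferOrder(concat)) {
    for (const member of contents.get(pds) ?? []) {
      result.set(member, pds); // higher-priority PDS transfers last and wins
    }
  }
  return result;
}
```

With a concatenation of `USER.COPY, TEAM.COPY, SHARED.COPY`, a member present in both `USER.COPY` and `SHARED.COPY` ends up coming from `USER.COPY`, matching the concatenation semantics.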
We also use the Git sparse-checkout option to filter the copybooks present in the working area, to satisfy an architectural rule concerning the sharing level of copybooks (from totally private to totally public, with intermediate levels matching the architecture of our information system).
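Such sparse-checkout filtering can be driven by a small mapping from sharing level to allowed folders. The levels and paths below are invented for illustration; the `git sparse-checkout` subcommands are the standard ones.

```typescript
// Sketch: restrict a secondary clone to the copybook folders that a given
// sharing level allows. Levels and folder names are hypothetical.
const allowedDirs: Record<string, string[]> = {
  private: ["copybooks/private/teamA"],
  domain:  ["copybooks/private/teamA", "copybooks/domain/finance"],
  public:  ["copybooks/private/teamA", "copybooks/domain/finance", "copybooks/public"],
};

// Build the git commands that narrow the working tree of `cloneDir`.
export function sparseCheckoutCommands(cloneDir: string, level: string): string[] {
  const dirs = allowedDirs[level] ?? [];
  return [
    `git -C ${cloneDir} sparse-checkout init --cone`,
    `git -C ${cloneDir} sparse-checkout set ${dirs.join(" ")}`,
  ];
}
```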
The zapp.yaml file is configured to (also) search for copybooks in the clone folder of the secondary copybook repositories.
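As a rough illustration only (not our actual file), such a zapp.yaml COBOL profile might look like the fragment below. The property names are an assumption and should be verified against the Z Open Editor zapp.yaml documentation; the paths are hypothetical.

```yaml
# Illustrative zapp.yaml fragment; verify property names against the
# Z Open Editor documentation. Paths are hypothetical.
name: programs
description: Main programs repository
profiles:
  - name: cobol-editing
    type: cobol
    settings:
      libraries:
        - name: syslib
          locations:
            # local copybook folders first, then the git-ignored clone folder
            - "**/copybooks"
            - "deps/copybooks-*/**"
```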
Thus, all copybooks are searched locally, and in addition we use versions of copybooks that depend on the Git branch in use in the main repository.
There is a small overhead when cloning the secondary repositories, but it is more than compensated by avoiding the cost of fetching copybooks one by one from PDSs on the remote z/OS site (and, incidentally, it does not require an account on the remote z/OS site).
The same secondary repositories cloning mechanism of copybooks is implemented in our Jenkins pipeline to prepare the workspace needed for builds by IBM Dependency Based Build.
We made some minor improvements for this issue in v3.0.0, but are leaving this item open as we are working on a more significant update for this issue.
Hi,
In our solution, with ZOE 3.0.0, also described in #293, searching for a local copybook takes an average of 0.3 seconds (SSD disk), including writing the warning message in the ZOE log about the number of folders scanned (without this logging, the time would still be under 0.3 seconds).
As we use IBM DBB to build our programs, managing the copybooks in Git repositories is sufficient. But nothing prevents a dual management:
Note that IBM DBB itself copies from the Git repositories to working PDSs for COBOL compilation, because the compiler cannot work with copybooks stored in USS paths. This remains transparent since it is integrated into the zAppBuild scripts.
We have now implemented parallel downloads from MVS with z/OSMF and RSE API (up to 5 at a time) for COBOL copybooks in v3.0.1. We will explore more options in the future and want to allow users to tweak this behavior with settings. We also observed on slower Windows PCs that the language server does not find resources for parallel download threads and reverts to downloading one at a time, which is something we want to fix. We also want to port this to PL/I and REXX.
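A bounded download queue of the kind described (up to 5 in flight, degrading to fewer when fewer workers can be started) can be sketched like this. This is an illustration of the pattern, not ZOE's actual implementation; `fetchMember` stands in for a z/OSMF or RSE API call.

```typescript
// Sketch of a bounded-parallelism download queue (not ZOE's actual code).
export async function downloadAll<T>(
  items: string[],
  fetchMember: (name: string) => Promise<T>, // placeholder for the MVS fetch
  maxParallel = 5,
): Promise<T[]> {
  const results: T[] = new Array(items.length);
  let next = 0;

  // Each worker pulls the next copybook name until the queue is empty.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: JS is single-threaded between awaits
      results[i] = await fetchMember(items[i]);
    }
  }

  // Degrade gracefully: never start more workers than there are items.
  const workers = Math.min(maxParallel, items.length);
  await Promise.all(Array.from({ length: workers }, worker));
  return results;
}
```

Setting `maxParallel` to 1 yields the old one-at-a-time behavior, which is why a user setting for this knob is attractive.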
Currently, we also see performance issues when using GLOB patterns for local files that require many directories to be searched, which we want to improve as well.
Closing this issue for now. Let's open new ones for any additional items.
Added the same improvements for PL/I. Also improved loading local files in Z Open Editor v3.1.0.
> We now implemented parallel downloads from MVS with z/OSMF and RSE API (up to 5) for COBOL copybooks in v3.0.1.
Be careful of user ID revocation on z/OS with expired passwords. For us, revocation is triggered after three access attempts with a bad or expired password.
Let us know if this still happens to you, because we have a separate connection test that must succeed first before we process the parallel download queue. If that does not work in specific situations then let us know in a new issue and we will fix it.
I would like to have faster fetch times for copybooks in my COBOL modules. We have run tests in different scenarios and found that the average fetch time for a single copybook is 1.41 seconds. It is especially slow on modules containing a lot of copybooks. This feels very slow, especially because the editor doesn't seem to reference the copybooks as they are downloaded, but waits until every single copybook has been downloaded.
The download time can be extremely annoying, especially if you have a module that contains 231 copybooks (see the attached document): you have to wait 5 minutes and 20 seconds before you are able to see the source of a copybook. We are aware that the number of libraries has an impact on the time. In our situation, we have 4 different environments, and each environment has 4 different hierarchies, giving 16 libraries; add a personal COBOL library and we have 17 libraries in total. The list of libraries is prioritized, so if a copybook is found in the first library, that is the one that should be used. (Filenames and paths have been redacted. This is an extreme example, but you get the idea.)
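As a quick sanity check of these numbers: 231 copybooks at roughly 1.41 seconds each gives about 326 seconds when fetched serially, in line with the observed wait of a bit over five minutes.

```typescript
// Back-of-the-envelope check of the reported wait time: 231 copybooks
// fetched one at a time at ~1.41 s each.
const copybooks = 231;
const secondsPerFetch = 1.41;
const totalSeconds = copybooks * secondsPerFetch; // 325.71 s
const minutes = Math.floor(totalSeconds / 60);
const seconds = Math.round(totalSeconds % 60);
console.log(`${minutes} min ${seconds} s`); // 5 min 26 s, close to the observed ~5:20
```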
ZOpenEditor Fetch Copybook Times.docx
Could this process be optimized in any way? Just an idea for potential optimization:
Other suggestions: a progress bar that shows an estimate of how long you have to wait before the copybooks have been fetched, and referencing each copybook source as soon as it has been downloaded, instead of waiting until everything has been downloaded.