ssyeds opened 1 year ago
@ssyeds There are two steps to start managing existing infrastructure: writing the resource configuration, then running `terraform import` commands. There have been a few requests for a utility to generate the Terraform configuration files from an Artifactory instance. There's currently no schedule for when that will be created.
Maybe an `import` block can work for your use case: https://developer.hashicorp.com/terraform/language/import Note: `import` blocks are only available in Terraform v1.5.0 and later.
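For a single resource, the pre-1.5 manual flow looks roughly like this (the resource type and key here are illustrative, following the naming used later in this thread):

```sh
# 1. write a matching resource block by hand in main.tf:
#    resource "artifactory_local_generic" "my-generic-local" {
#      key = "my-generic-local"
#    }
# 2. then bind the existing repo to that address:
terraform import artifactory_local_generic.my-generic-local my-generic-local
```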
@alexhung can you give an example of, say, what a `terraform import` would look like for a `local`, `remote`, and `virtual` repo, as well as maybe some `user`s? What would the `import` syntax look like for an example?
https://developer.hashicorp.com/terraform/language/import
@chb0github The new `import` block does half of the job already with its ability to generate the HCL from IDs. We can help with the remaining part: query JFrog to extract the resource types and IDs, and generate the `import` blocks.
provider "artifactory" {
...
}
import {
to = artifactory_local_generic.my-generic-local
id = "my-generic-local"
}
Then
terraform import plan -generate-config-out=generated.tf
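The same shape should work for the other categories asked about above; a rough sketch (the exact resource type names are illustrative here, check the provider docs for the real ones):

```hcl
import {
  to = artifactory_remote_npm.my-npm-remote        # illustrative type name
  id = "my-npm-remote"
}

import {
  to = artifactory_virtual_maven.my-maven-virtual  # illustrative type name
  id = "my-maven-virtual"
}

import {
  to = artifactory_user.jane
  id = "jane"  # users should import by name
}
```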
@chb0github I envision this tool will let the user specify the 'category' to extract, e.g. local/remote/virtual/federated repos, security, configuration, etc., as well as a specific resource type like `artifactory_permission_target`. Probably mutually exclusive CLI args 😄
So users would need to execute this tool multiple times to generate multiple `import.tf` files, making the process iterative and manageable.
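Hypothetical invocations for such a tool might look like this (the `mkimport` name and flag spellings are assumptions, borrowed from later in this thread):

```sh
# each run extracts one category into its own import file
mkimport --repos local  > import-local-repos.tf
mkimport --repos remote > import-remote-repos.tf
mkimport --security     > import-security.tf
mkimport --resource artifactory_permission_target > import-permission-targets.tf
```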
Yeah, I was thinking something like:

```sh
cat - <<EOF > import.hcl
provider "artifactory" {
}

import {
$(curl -snLf https://my.artifactory.com/artifactory/api/repositories | jq -re 'map("
  to = \(.packageType).\(.key)
  id = \"\(.key)\"
") | .[]')
}
EOF
terraform import
```
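For a single hypothetical repo keyed `my-generic-local` with packageType `generic`, that would emit an `import.hcl` roughly like this (note the `to` address isn't a full provider resource type yet; that gets refined below):

```hcl
provider "artifactory" {
}

import {
  to = generic.my-generic-local
  id = "my-generic-local"
}
```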
If you give me

```sh
curl -snLf https://my.artifactory.com/artifactory/api/repositories
```

-> which I think just gives an array of `{name,key}` -> and

```sh
curl -snLf https://my.artifactory.com/artifactory/api/repositories/{key}
```

I am sure I can whip up something fast that will generate a monster HCL. I am feeling a bit bored ATM 😴
From a CLI: `mkimport --users --repositories > import.hcl`
Not too challenging
@chb0github 😄 Now try this on an instance with 5000+ repos. And you must not DDoS the instance.
Sure. Np
Looking at the docs, actually, only a single call is needed:

```sh
curl -snLf http://myart.company.com/artifactory/api/repositories | jq -re '.[] | "\(.key) \(.type) \(.packageType)"'
```

```
libs-releases-local LOCAL Generic
libs-snapshots-local LOCAL Maven
```

Here's the sample. There should be no scaling issue at all because, until you run `terraform import`, it's a single call to the server.
packages.json:
```json
[
  {
    "key"         : "libs-releases-local",
    "type"        : "LOCAL",
    "description" : "Local repository for in-house libraries",
    "url"         : "http://localhost:8081/artifactory/libs-releases-local",
    "packageType" : "Generic"
  },
  {
    "key"         : "libs-snapshots-local",
    "type"        : "LOCAL",
    "description" : "Local repository for in-house snapshots",
    "url"         : "http://localhost:8081/artifactory/libs-snapshots-local",
    "packageType" : "Maven"
  }
]
```
If we continue on this path a bit:

```sh
cat << EOF
provider "artifactory" {
}
import {
$(jq -re '.[] | "\(.key) \(.type) \(.packageType)"' packages.json | xargs -n 3 bash -c 'printf "
to = ${2,,}.${1,,}
id = \"${3,,}\""' _)
}
EOF
```
provider "artifactory" {
}
import {
to = local.libs-releases-local
id = "generic"
to = local.libs-snapshots-local
id = "maven"
}
Here is a loop/read version that should be faster since it doesn't spawn a shell per repo:

```sh
while read -r key _ package; do
  echo "to = ${package,,}.${key,,}"
  echo "id = \"${key,,}\""
done < <(jq -re '.[] | "\(.key) \(.type) \(.packageType)"' packages.json)
```

```
to = generic.libs-releases-local
id = "libs-releases-local"
to = maven.libs-snapshots-local
id = "libs-snapshots-local"
```
You know, I actually thought about this: technically you don't need `terraform import`: you can just interrogate all the resources yourself and generate the proper HCL.
This generates the proper syntax:

```sh
printf '
provider "artifactory" {
}
'
while read -r key type package; do
  printf '
import {
  to = artifactory_%s_%s.%s
  id = "%s"
}
' "${type,,}" "${package,,}" "${key,,}" "${key,,}"
done < <(jq -re '.[] | "\(.key) \(.type) \(.packageType)"' packages.json)
```
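Against the `packages.json` sample above, that emits:

```hcl
provider "artifactory" {
}

import {
  to = artifactory_local_generic.libs-releases-local
  id = "libs-releases-local"
}

import {
  to = artifactory_local_maven.libs-snapshots-local
  id = "libs-snapshots-local"
}
```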
And the way to handle importing some, but not others, is to create a function per resource type, then put those in a set and execute them:

```sh
function importRepos {
  while read -r key type package; do
    cat <<EOF
import {
  to = artifactory_${type,,}_${package,,}.${key,,}
  id = "${key,,}"
}
EOF
  done < <(jq -re '.[] | "\(.key) \(.type) \(.packageType)"' packages.json)
}
```
```sh
function importUsers {
  for i in {1..10}; do
    local username="username-${RANDOM}-${i}"
    cat <<EOF
import {
  to = artifactory_user.${username}
  id = "${username}"
}
EOF
  done
}
```
```sh
resources=(importRepos importUsers importRepos)
for f in $(printf '%s\n' "${resources[@]}" | sort -u); do
  "$f"
done
```
With this sort of thing, they could do `mkimport --users --repos --users --repos` and only ever get `users` and `repos` once.
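A minimal sketch of that flag handling, assuming the function names above (the flag spellings are assumptions):

```sh
declare -A seen
for arg in "$@"; do
  case "$arg" in
    --users) fn=importUsers ;;
    --repos) fn=importRepos ;;
    *) echo "unknown arg: $arg" >&2; exit 1 ;;
  esac
  # run each generator at most once, however many times its flag repeats
  if [[ -z "${seen[$fn]:-}" ]]; then
    seen[$fn]=1
    "$fn"
  fi
done
```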
> You know, I actually thought about this: technically you don't need `terraform import`: you can just interrogate all the resources yourself and generate the proper HCL.

That's the original design/plan for this tool.
@chb0github You should consider creating a public repo with these scripts 😄
I have... repeatedly... :) Let's sync up this morning and discuss an approach for this thing
I think what is/was being proposed is that you can generate the

```hcl
import {
}
```

block, and then just run `terraform plan -generate-config-out=...` (or whatever the flag is), and it will generate the HCL. I mean, after all, TF already knows the structure of all your resources.
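The end-to-end flow with the generated blocks would then be roughly:

```sh
# import.tf contains the provider + import blocks generated above
terraform init
terraform plan -generate-config-out=generated.tf  # writes HCL for each import target
# review generated.tf, then bring the resources under management
terraform apply
```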
This issue was created before HashiCorp released the `import` block update. So the original tool idea/plan was to create an end-to-end tool that queries Artifactory for all the resources and generates the HCL.
Then the `import` block was released and the request morphed into "let's leverage the new block", as it already does half of the work.
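For reference, a rough sketch of that end-to-end approach against the `packages.json` sample above; the resource type naming and attributes here are assumptions, and real repo configs carry many more fields:

```sh
# emit a full resource block per repo instead of an import block
while read -r key type package; do
  cat <<EOF
resource "artifactory_${type,,}_${package,,}_repository" "${key,,}" {
  key = "${key}"
}
EOF
done < <(jq -re '.[] | "\(.key) \(.type) \(.packageType)"' packages.json)
```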
Well, it can be done the way you suggest, and wouldn't be impossible. If you still wanna do it that way, you can throttle your calls easily enough:

```sh
curl -snLf https://my.artifactory.com/artifactory/api/repositories | jq -re '.[].key' |
  xargs printf 'https://my.artifactory.com/artifactory/api/repositories/%s ' |
  xargs -n 10 -P 10 curl -snLf | jq -sre 'flatten'
```
This last part is where the throttling is done:

```sh
xargs -n 10 -P 10 curl -snLf
```

The `-n 10` says (basically) "pass 10 URLs at a time to curl" (curl can do multi-fetch, but I don't know if it forks off independent processes for the job or not), and the `-P 10` tells xargs explicitly to manage a thread/process pool for parallelization of no more than 10.
You were worried about DoSing the system. This lets you dial in some default or let the user override it. If you just did a `curl ... &` in a for loop, you'd exhaust your available system process count AND potentially cause errors on the server side, choking either it or any firewall it's using.
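A variant that avoids interleaved output on stdout when transfers run concurrently, writing each repo's config to its own file first (assumes GNU xargs; the file naming is illustrative):

```sh
# at most 10 curls in flight; one JSON file per repo key
curl -snLf https://my.artifactory.com/artifactory/api/repositories | jq -re '.[].key' |
  xargs -I {} -P 10 curl -snLf -o 'repo-{}.json' \
    'https://my.artifactory.com/artifactory/api/repositories/{}'
# then merge into a single array
jq -s '.' repo-*.json > all-repos.json
```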
So, which approach were you thinking?
@ssyeds - This PR is almost merged in. I'll be curious to get your input.
Being able to mass-import an existing infrastructure into Terraform templates would be very helpful. Currently, from my understanding, the only way to do this is to run a `terraform import` for every resource individually, which can lead to a long list of TF templates being created. A mass import would allow for much cleaner readability and usage.