Unfortunately, no cloud vendor provides a friendly API to list all of its public cloud services and categories as they appear on the AWS Products, GCP Products and Azure Services pages.
The idea is to have a unified JSON schema for all cloud services:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "id": { "type": "string" },
      "name": { "type": "string" },
      "summary": { "type": "string" },
      "url": { "type": "string" },
      "categories": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "id": { "type": "string" },
            "name": { "type": "string" }
          },
          "required": ["id", "name"]
        }
      },
      "tags": {
        "type": "array",
        "items": { "type": "string" }
      }
    },
    "required": ["id", "name", "summary", "url", "categories", "tags"]
  }
}
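For illustration, here is a minimal sketch that checks a record against this schema with the `jsonschema` package. The sample service entry is made up, and the `schema.json` file name is an assumption:

```python
# pip install jsonschema
import json

import jsonschema

# Load the schema above; the file name is an assumption.
with open("schema.json") as f:
    schema = json.load(f)

# A made-up catalog entry shaped like the records in data/*.json.
sample = [{
    "id": "compute-engine",
    "name": "Compute Engine",
    "summary": "Virtual machines running in Google's data centers.",
    "url": "https://cloud.google.com/compute",
    "categories": [{"id": "compute", "name": "Compute"}],
    "tags": ["gcp/product/compute-engine", "gcp/category/compute"],
}]

# Raises jsonschema.ValidationError if the sample does not conform.
jsonschema.validate(instance=sample, schema=schema)
print("sample is valid")
```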
The AWS Products page uses an undocumented https://aws.amazon.com/api/dirs/items/search
endpoint to fetch paged JSON records for the available cloud products.
# download AWS service JSON file and generate data/aws.json
pip install -r requirements.txt
python discovery/aws.py > data/aws.json
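For reference, a minimal sketch of the paging loop such a script might use. Since the endpoint is undocumented, the query parameters below (`item.directoryId`, `item.locale`, `size`, `page`) are assumptions, not a published contract; inspect the page's network traffic to confirm them.

```python
# Hypothetical paging loop against the undocumented search endpoint.
import requests

URL = "https://aws.amazon.com/api/dirs/items/search"

def fetch_aws_products(page_size=100):
    items, page = [], 0
    while True:
        resp = requests.get(URL, params={
            "item.directoryId": "aws-products",  # assumed directory id
            "item.locale": "en_US",              # assumed locale key
            "size": page_size,
            "page": page,
        }, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        if not batch:  # stop once a page comes back empty
            break
        items.extend(batch)
        page += 1
    return items
```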
The GCP Products page is rendered on the server side and all data is embedded in the web page.
# scrape the GCP Products page to get all services and generate data/gcp.json
pip install -r requirements.txt
python discovery/gcp.py > data/gcp.json
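The same scraping approach applies to the Azure Services and Google Workspace pages below. A minimal sketch, assuming the service data can be located with an HTML selector; the selector and attribute names here are placeholders, not the real page markup:

```python
# Hypothetical scraper: fetch the server-rendered page and pull the
# embedded service entries out of the HTML.
import json

import requests
from bs4 import BeautifulSoup

html = requests.get("https://cloud.google.com/products", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

services = []
for card in soup.select("a.product-card"):  # placeholder selector
    services.append({
        "name": card.get_text(strip=True),
        "url": card.get("href", ""),
    })
print(json.dumps(services, indent=2))
```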
The Azure Services page is rendered on the server side and all data is embedded into the web page.
# scrape the Azure Services page to get all services and generate data/azure.json
pip install -r requirements.txt
python discovery/azure.py > data/azure.json
Edit the ms365.json file, using data from this page.
This page contains all Google Workspace services.
# scrape the Google Workspace page to get all services and generate data/gsuite.json
pip install -r requirements.txt
python discovery/gsuite.py > data/gsuite.json
Edit the cmp.json file, using the CMP UI and documentation.
Edit the credits.json file.
Run the tags.sh script to regenerate the tags.json file, which contains all platform, category and service tags from all services.
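tags.sh itself is a shell script; a Python equivalent of the aggregation it performs might look like this. The data/ input paths follow the commands above, while the output path and flat-list output shape are assumptions:

```python
# Hypothetical equivalent of tags.sh: collect every tag emitted by the
# discovery scripts into one deduplicated, sorted list.
import glob
import json

tags = set()
for path in glob.glob("data/*.json"):
    with open(path) as f:
        for service in json.load(f):
            tags.update(service.get("tags", []))

with open("data/tags.json", "w") as f:  # output path is an assumption
    json.dump(sorted(tags), f, indent=2)
```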
Upload all generated JSON files to the public cloud_tags Cloud Storage bucket.
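One way to do the upload, sketched with the google-cloud-storage client; the object naming and credentials handling are assumptions:

```python
# Hypothetical upload step; requires authenticated Application Default
# Credentials with write access to the bucket.
import glob
import os

from google.cloud import storage

bucket = storage.Client().bucket("cloud_tags")
for path in glob.glob("data/*.json"):
    # Upload each file under its base name, e.g. data/aws.json -> aws.json
    bucket.blob(os.path.basename(path)).upload_from_filename(path)
```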
Focus Areas support specific services and categories based on this repo. Updates to the service/category mappings for Focus Areas are made using the following process, after which the zenrouter-infra repo is updated with the output.
Each row in the product mapping file has the following fields:

- `product` - the name of the product, taken from the `name` attribute in the cloud catalog files
- `platform`, `p_group`, `focus_area` - these values must match one of the Focus Areas defined in FocusAreas.tsv
- `support_level` - must be PRIMARY or SECONDARY, mapped to the ZenRouter skills tier
- `status` - only entries with status VERIFIED will be processed by the build process (see the sketch after the table)
- `meets_volume_criteria` and `support_level_desc` - can be ignored and set to any value; they were only used for the initial FA mapping

| product | platform | p_group | focus_area | support_level | status | meets_volume_criteria | support_level_desc |
|---|---|---|---|---|---|---|---|
| AWS Amplify | AWS | Infrastructure | DevOps | SECONDARY | VERIFIED | N/A | N/A |
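A minimal sketch of the VERIFIED rule above; the mapping file name here is hypothetical:

```python
# Hypothetical illustration of the status rule: only VERIFIED rows are
# picked up by the build. The TSV file name is a placeholder.
import csv

with open("focus_areas/product_mappings.tsv") as f:
    verified = [row for row in csv.DictReader(f, delimiter="\t")
                if row["status"] == "VERIFIED"]
```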
Category mappings use a similar format:

- `platform`, `p_group`, `focus_area` - these values must match one of the Focus Areas defined in FocusAreas.tsv
- `category_tag` - the actual category tag
- `support_level` - must be PRIMARY or SECONDARY, mapped to the ZenRouter skills tier

| platform | p_group | focus_area | category_tag | support_level |
|---|---|---|---|---|
| AWS | Data | Databases | aws/category/migration | SECONDARY |
# Python 3.12 breaks PySpark due to the removal of distutils, so Python 3.11 is required
brew install python@3.11
brew install java
mkdir -p build
python3.11 -m venv build/venv
source build/venv/bin/activate
cd focus_areas/
python -m pip install -r requirements.txt
python ./build_focus_areas.py
# verify the product was added to the focus area in data/focus_areas/all.json, then commit/push
git add data/focus_areas/
git commit -m "Added xyz Product to abc Focus Area"
git push
Once merged into master, deploy the changes into BigQuery
# Valid ADC required to deploy this, or a configured service account
gcloud auth application-default login
git checkout master
git pull
mkdir -p build
python3 -m venv build/venv
source build/venv/bin/activate
cd focus_areas/
python -m pip install -r requirements.txt
python ./deploy_to_bq.py --build --deploy --project doit-zendesk-analysis
Once merged into master, deploy the changes into ZenRouter Infra
git checkout master
git pull
mkdir -p build
python3 -m venv build/venv
source build/venv/bin/activate
cd focus_areas/
python -m pip install -r requirements.txt
python ./generate_hcl.py | pbcopy
variable "focus_areas"
default
attribute
variable "focus_areas" {
type = map(object({
id = string
name = string
practice_area = string
primary_skills = list(string)
secondary_skills = list(string)
}))
default =
<Paste the output from generate_hcly.py here>
}
# initialize terraform
terraform init
# validate the file has no syntax issues
terraform validate
# format the file before commit
terraform fmt datastore.tf