I often find myself wanting to use ydm to manage my infrastructure/apps/etc. -- as in, I want to compose an app from existing images. The way I've been managing things, whether through docker or with the help of ydm, has basically been to create a directory called "docker" in my home folder; I call it the work directory.
the work directory
In here I have two different usage patterns. First there's my straight-up docker usage (no ydm, just pure docker commands); these end up as executable bash files. For example, I have a script named "sinopia" with contents:
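The script body itself doesn't appear above. Purely as a hedged sketch of what such a one-off docker wrapper tends to look like -- the image name, port, and volume path below are assumptions, not the actual contents -- it might be something like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a "sinopia" runner script. The image name, the
# conventional sinopia registry port (4873), and the storage volume path
# are all assumptions, not taken from the issue.
# Wrapped in a function so the file can be sourced and inspected without
# immediately starting a container; a real script would just call it.
run_sinopia() {
  docker run -d \
    --name sinopia \
    -p 4873:4873 \
    -v "$HOME/docker/sinopia-storage:/sinopia/storage" \
    keyvanfatehi/sinopia
}
```

The point is only the pattern: one tiny executable file per simple image, each a thin wrapper around a single `docker run` invocation.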
Then there's my ydm usage pattern, which consists of a directory nested in the work dir called "ydm", where I manage my own "mental" scope and additional scripts.
For gitlab, ydm accepts a JSON file to set your environment, so in my ydm/gitlab directory I have a file env.json that looks like this:
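The file contents don't appear above. As a hedged sketch only -- every key and value here is an assumption for illustration, not ydm's actual schema -- an env.json for a gitlab setup might carry values along these lines:

```json
{
  "GITLAB_HOST": "gitlab.example.com",
  "GITLAB_HTTP_PORT": "8080",
  "GITLAB_SSH_PORT": "2222",
  "POSTGRES_PASSWORD": "changeme"
}
```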
I also have 2 executable scripts: one named "apply" and one named "destroy".
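The two scripts' contents aren't shown. A hedged sketch of the pairing -- the `ydm run` and `ydm rm` subcommands below are assumptions about ydm's CLI, not documented usage; only the bring-up/tear-down pattern is the point -- could look like:

```shell
#!/usr/bin/env bash
# Hypothetical apply/destroy pair for ydm/gitlab, shown as functions so the
# file can be sourced without side effects. The ydm subcommands and flags
# are assumptions, not ydm's documented interface.
apply() {
  ydm run gitlab --env env.json
}
destroy() {
  ydm rm gitlab
}
```

Pairing every bring-up script with a matching tear-down script is what keeps a complex deployment like gitlab reproducible and disposable.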
So, as you can see, ydm helps me stay organized and in control of a complex setup like gitlab; simple images don't require ydm's help.
Now -- this issue takes that as a lesson and presents a new problem: not everything is open source, and most complex things are going to be proprietary. I need a place to create my own private drops, and ydm needs to know how to use them and elegantly let me develop on them. Let's create a 'ydm init' command.
ydm init
Let's assume that ydm did not ship with a gitlab drop, and that gitlab happens to be our proprietary thing. How can we leverage ydm to make using it in our infrastructure more manageable, especially across different contexts, departments, etc. that may be siloed? (In other words, how can we "define" a gitlab deployment without being too opinionated, without forcing a certain type of configuration?) Well, we would generate a ydm drop for it, since it is not a trivial usage of docker.
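One way to picture the proposed command -- everything below is a sketch of hypothetical behavior, not something ydm does today, and the file names are invented for illustration -- is that running it inside the work dir would scaffold a private drop skeleton mirroring the gitlab layout described above:

```
$ cd ~/docker/ydm
$ ydm init gitlab
gitlab/
├── drop.json   # drop definition (images, links, volumes)
├── env.json    # environment template for the consumer to fill in
├── apply       # bring-up script
└── destroy     # tear-down script
```

A skeleton like this would let each department keep its own env.json while sharing the same drop definition, which is exactly the "define without being opinionated" goal stated above.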