
Cloud Native MQ GitOps Configuration

In the cluster configuration of this chapter, we installed the fundamental components for continuous integration and continuous deployment into our cluster. These included a sample GitOps repository and ArgoCD.

In this section we're going to customize and enable the GitOps repository, so that we can install the components highlighted in our CP4S CICD process:

We'll examine these components in more detail throughout this section of the tutorial; here's an initial overview of their function:

  • ci-namespace provides an execution namespace for our pipelines
  • tools-namespace provides an execution namespace for tools such as Artifactory
  • cp4s-namespace provides an execution namespace for our deployed CP4S when running in the development environment. Later in the tutorial, we'll add staging and prod namespaces.
  • ArgoCD applications manage the ci, tools and dev namespaces.

You may have already noticed that the GitOps repository contains YAMLs that refer to the ArgoCD and namespaces in the diagram above. We're now going to use these YAMLs to configure the cluster resources using GitOps.

Becoming comfortable with these concepts by practicing GitOps will help us in later chapters when we work with the CP4S Cloud Pak.

In this topic, we're going to:

  • Explore the sample GitOps repository in a little more detail
  • Customize the GitOps repository for our cluster
  • Connect ArgoCD to the customized GitOps repository
  • Bootstrap the cluster
  • Explore how the ci namespace is created
  • Try out some dynamic changes to the cluster with ArgoCD
  • Explore how ArgoCD manages configuration drift

By the end of this topic we'll have a cluster up and running, having used GitOps to do it. We'll fully understand how ArgoCD manages cluster change and configuration drift.


Pre-requisites

Before attempting this section, you must have completed the following tasks:

  • You have created an OCP cluster instance.
  • You have installed on your local machine the oc command that matches the version of your cluster.
  • You have also installed npm, git and tree commands.
  • You have installed ArgoCD and cloned the sample GitOps repositories.

Please see the previous sections of this guide for information on how to do these tasks.


Video Walkthrough

This video demonstrates how to use ArgoCD and the GitOps repository to set up the infrastructure-related components.

It is a walkthrough that takes you step by step through the sections below.


The sample GitOps repository

Let's understand how GitOps works in practice by using it to install the components we've highlighted in the above diagram.

Let's first look at the high level structure of the multi-tenancy-gitops GitOps repository.

  1. Ensure you're logged in to the cluster

    Tip

    Ensure you're in the terminal window that you used to set up your cluster, i.e. in the multi-tenancy-gitops subfolder.

    Log into your OCP cluster, substituting the --token and --server parameters with your values:

    oc login --token=<token> --server=<server>
    

    If you are unsure of these values, click your user ID in the OpenShift web console and select "Copy Login Command".


  2. Locate your GitOps repository

    If necessary, change to the root of your GitOps repository, which is stored in the $GIT_ROOT environment variable.

    Issue the following command to change to your GitOps repository:

    cd $GIT_ROOT
    cd multi-tenancy-gitops
    


  3. Explore the high level folder structure

    Use the following command to display the folder structure:

    tree . -d -L 2
    

    We can see the different folders in the GitOps repository:

    .
    ├── 0-bootstrap
    │   ├── others
    │   └── single-cluster
    ├── doc
    │   ├── diagrams
    │   ├── experimental
    │   ├── images
    │   └── scenarios
    ├── scripts
    │   └── bom
    └── setup
        ├── ocp47
        └── ocp4x
    

    The 0-bootstrap folder is the key folder: it contains different profiles for different cluster topologies. A cluster profile such as single-cluster controls which resources are deployed to a single cluster. We'll be using the single-cluster profile, although you can see that other profiles are available.

    There are other folders containing utility scripts and some documentation; we'll explore these later.
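If the tree command isn't available on your machine, a rough substitute using find is shown below (a sketch; it lists directories rather than drawing a tree):

```shell
# List directories up to two levels deep, similar in spirit to
# `tree . -d -L 2` but without the tree-style drawing.
dirs=$(find . -maxdepth 2 -type d | sort)
printf '%s\n' "$dirs"
```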


  4. The 0-bootstrap folder

    The process of installing components into a cluster is called bootstrapping because it's the first thing that happens to a cluster after it has been created. We will bootstrap our cluster using the 0-bootstrap folder in the multi-tenancy-gitops repository.

    Let us examine the 0-bootstrap folder structure:

    tree ./0-bootstrap/ -d -L 2
    

    We can see the different folders:

    ./0-bootstrap/
    ├── others
    │   ├── 1-shared-cluster
    │   ├── 2-isolated-cluster
    │   └── 3-multi-cluster
    └── single-cluster
        ├── 1-infra
        ├── 2-services
        └── 3-apps
    

    Notice the different cluster profiles. We're going to use the single-cluster profile. See how this profile has three sub-folders corresponding to three layers of components: infrastructure, services and applications. Every component in our architecture will be in one of these layers.


  5. The single-cluster profile in more detail

    Use the following command to display the single-cluster folder in more detail:

    tree ./0-bootstrap/single-cluster/ -L 2
    

    We can see the different folders:

    ./0-bootstrap/single-cluster/
    ├── 1-infra
    │   ├── 1-infra.yaml
    │   ├── argocd
    │   └── kustomization.yaml
    ├── 2-services
    │   ├── 2-services.yaml
    │   ├── argocd
    │   └── kustomization.yaml
    ├── 3-apps
    │   ├── 3-apps.yaml
    │   ├── argocd
    │   └── kustomization.yaml
    ├── bootstrap.yaml
    └── kustomization.yaml
    

    Again, see the different layers of the architecture: infrastructure, services and applications.

    Notice how each of these high level folders (1-infra, 2-services, 3-apps) has an argocd folder. These argocd folders contain the ArgoCD applications that control which resources in that architectural layer are deployed to the cluster.


  6. ArgoCD applications

    Later in this tutorial, we'll see in detail how these ArgoCD applications work. For now, let's explore the range of ArgoCD applications that control the infrastructure components deployed to the cluster.

    Type the following command:

    tree 0-bootstrap/single-cluster/1-infra/
    

    It shows a list of ArgoCD applications that are used to manage Kubernetes infrastructure resources:

    0-bootstrap/single-cluster/1-infra/
    ├── 1-infra.yaml
    ├── argocd
    │   ├── consolelink.yaml
    │   ├── consolenotification.yaml
    │   ├── infraconfig.yaml
    │   ├── machinesets.yaml
    │   ├── namespace-baas.yaml
    │   ├── namespace-ci.yaml
    │   ├── namespace-cloudpak.yaml
    │   ├── namespace-db2.yaml
    │   ├── namespace-dev.yaml
    │   ├── namespace-ibm-common-services.yaml
    │   ├── namespace-instana-agent.yaml
    │   ├── namespace-istio-system.yaml
    │   ├── namespace-mq.yaml
    │   ├── namespace-openldap.yaml
    │   ├── namespace-openshift-storage.yaml
    │   ├── namespace-prod.yaml
    │   ├── namespace-robot-shop.yaml
    │   ├── namespace-sealed-secrets.yaml
    │   ├── namespace-spp-velero.yaml
    │   ├── namespace-spp.yaml
    │   ├── namespace-staging.yaml
    │   ├── namespace-tools.yaml
    │   ├── scc-wkc-iis.yaml
    │   ├── serviceaccounts-db2.yaml
    │   ├── serviceaccounts-ibm-common-services.yaml
    │   ├── serviceaccounts-mq.yaml
    │   ├── serviceaccounts-tools.yaml
    │   └── storage.yaml
    └── kustomization.yaml
    

    Notice the many namespace- YAMLs; we'll see in a moment how each of these defines an ArgoCD application dedicated to managing a Kubernetes namespace in our cluster. The serviceaccounts- YAMLs similarly manage service accounts.

    There are similar ArgoCD applications for the service and application layers. Feel free to examine their corresponding folders; we will look at them in more detail later.


  7. A word on terminology

    As we get started with ArgoCD, it can be easy to confuse the term ArgoCD application with your application. That's because ArgoCD uses the term application to refer to a Kubernetes custom resource that was initially designed to manage a set of application resources. However, an ArgoCD application can automate the deployment of any Kubernetes resource within a cluster, such as a namespace, as we'll see a little later.
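For reference, here is a schematic ArgoCD Application resource with placeholder names; the field layout follows the concrete examples shown later in this section:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application            # an ArgoCD "application", not your application
metadata:
  name: example-app          # placeholder name
  namespace: openshift-gitops
spec:
  project: default
  destination:               # where the managed resources are applied
    namespace: example-namespace
    server: https://kubernetes.default.svc
  source:                    # the Git folder that declares the desired state
    path: path/in/repo
    repoURL: https://github.com/example-org/example-repo.git
    targetRevision: master
  syncPolicy:
    automated:               # keep the cluster in sync with the repo
      prune: true
      selfHeal: true
```

Any Kubernetes resource declared under source.path -- a namespace, a role binding, an operator subscription -- will be kept in sync with the cluster.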


Why customize

We're now going to customize two of the sample GitOps repositories to enable them to be used by ArgoCD to deploy Kubernetes resources to your particular cluster. The process of customization is important to understand; we will use it throughout the tutorial.

Recall the different roles of the four repositories:

  • multi-tenancy-gitops is the main GitOps repository. It contains the ArgoCD YAMLs for the fixed components in the cluster such as a namespace or a Tekton or SonarQube instance.
  • multi-tenancy-gitops-infra is a library repository. It contains infrastructure YAMLs referred to by the ArgoCD YAMLs that manage infrastructure in the cluster, such as a namespace.
  • multi-tenancy-gitops-services is a library repository. It contains the service YAMLs referred to by the ArgoCD YAMLs that manage services in the cluster, such as Tekton or SonarQube.
  • multi-tenancy-gitops-apps is a GitOps repository containing the user application components that are deployed to the cluster. These components include applications, databases and queue managers, for example.

It is the multi-tenancy-gitops and multi-tenancy-gitops-apps repositories that need to be customized for your cluster. The other two repositories don't need to be customized because they perform the function of a library.
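As a quick sketch, the clone URLs of these four repositories can be derived from the $GIT_ORG environment variable used throughout this guide (my-github-org below is a placeholder):

```shell
# Print the clone URL for each of the four repositories in your organization.
GIT_ORG=${GIT_ORG:-my-github-org}   # placeholder; set to your GitHub organization
urls=$(for repo in multi-tenancy-gitops multi-tenancy-gitops-infra \
                   multi-tenancy-gitops-services multi-tenancy-gitops-apps; do
  echo "https://github.com/${GIT_ORG}/${repo}.git"
done)
printf '%s\n' "$urls"
```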

Tailor multi-tenancy-gitops

Let's now customize the multi-tenancy-gitops repository for your organization. The repository provides a script that uses the $GIT_ORG environment variable to replace the generic values in the cloned repository with those of your GitHub organization.

Once customized, we'll push our updated repository back to GitHub where it can be accessed by ArgoCD.

  1. Set your git branch

    In this tutorial, we use the master branch of the multi-tenancy-gitops repository. We will store this in the $GIT_BRANCH environment variable for use by various scripts and commands.

    Issue the following command to set its value:

    export GIT_BRANCH=master
    

    You can verify your $GIT_BRANCH as follows:

    echo $GIT_BRANCH
    


  2. Run the customization script

    We customize the cloned multi-tenancy-gitops repository with the relevant values for our cluster using the set-git-source.sh script. The script replaces various YAML elements in this repository so that they refer to your git organization via a GitHub URL.

    Now run the script:

    ./scripts/set-git-source.sh
    

    The script lists the customizations it applies and all the files it modifies:

    Setting kustomization patches to https://github.com/prod-ref-guide/multi-tenancy-gitops.git on branch master
    Setting kustomization patches to https://github.com/prod-ref-guide/multi-tenancy-gitops-infra.git on branch master
    Setting kustomization patches to https://github.com/prod-ref-guide/multi-tenancy-gitops-services.git on branch master
    Setting kustomization patches to https://github.com/prod-ref-guide/multi-tenancy-gitops-apps.git on branch master
    done replacing variables in kustomization.yaml files
    git commit and push changes now
    

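To illustrate what the script does, here is a minimal sketch (not the real script) of that kind of substitution, run against a hypothetical temporary file rather than your repository:

```shell
# Replace templated repoURL/targetRevision placeholders with concrete values.
GIT_ORG=tutorial-org-123   # example organization used in this guide
GIT_BRANCH=master
f=$(mktemp)
printf '%s\n' \
  'repoURL: ${GIT_BASEURL}/${GIT_ORG}/${GIT_GITOPS}' \
  'targetRevision: ${GIT_GITOPS_BRANCH}' > "$f"
# sed -i.bak works with both GNU and BSD sed
sed -i.bak \
  -e "s|\${GIT_BASEURL}/\${GIT_ORG}/\${GIT_GITOPS}|https://github.com/${GIT_ORG}/multi-tenancy-gitops.git|" \
  -e "s|\${GIT_GITOPS_BRANCH}|${GIT_BRANCH}|" "$f"
result=$(cat "$f")
printf '%s\n' "$result"
```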

  3. Explore the customization changes

    You can easily identify all the files that have been customized using the git status command.

    Issue the following command:

    git status
    

    to view the complete set of changed files in the multi-tenancy-gitops repository:

    On branch master
    Your branch is up to date with 'origin/master'.
    
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git restore <file>..." to discard changes in working directory)
            modified:   0-bootstrap/bootstrap.yaml
            modified:   0-bootstrap/others/1-shared-cluster/bootstrap-cluster-1-cicd-dev-stage-prod.yaml
            modified:   0-bootstrap/others/1-shared-cluster/bootstrap-cluster-n-prod.yaml
            ...
            modified:   0-bootstrap/single-cluster/kustomization.yaml
    

    (We've abbreviated the list.)

    You can see the kind of changes that the script has made with the git diff command:

    git diff 0-bootstrap/bootstrap.yaml
    

    which calculates the differences in the file as:

    diff --git a/0-bootstrap/bootstrap.yaml b/0-bootstrap/bootstrap.yaml
    index 0754133..7303c53 100644
    --- a/0-bootstrap/bootstrap.yaml
    +++ b/0-bootstrap/bootstrap.yaml
    @@ -10,8 +10,8 @@ spec:
       project: default
       source:
         path: 0-bootstrap/single-cluster
    -    repoURL: ${GIT_BASEURL}/${GIT_ORG}/${GIT_GITOPS}
    -    targetRevision: ${GIT_GITOPS_BRANCH}
    +    repoURL: https://github.com/tutorial-org-123/multi-tenancy-gitops.git
    +    targetRevision: master
       syncPolicy:
         automated:
           prune: true
    

    See how the repoURL and targetRevision YAML elements now refer to your organization in GitHub rather than a generic name.

    By making YAMLs like this active in your cluster, ArgoCD is able to refer to your repository to manage its contents.


  4. Add the changes to a git index, ready to push to GitHub

    Now that we've customized the local clone of the multi-tenancy-gitops repository, we should commit the changes, and push them back to GitHub where they can be accessed by ArgoCD.

    First, we add the changes to the git index:

    git add .
    


  5. Commit the changes to git

    We then commit these staged changes:

    git commit -s -m "GitOps customizations for organization and cluster"
    

    which will show the commit message:

    [master a900c39] GitOps customizations for organization and cluster
     46 files changed, 176 insertions(+), 176 deletions(-)
    


  6. Push changes to GitHub

    Finally, we push this commit back to the master branch on GitHub:

    git push origin $GIT_BRANCH
    

    which shows that the changes have now been pushed to your GitOps repository:

    Enumerating objects: 90, done.
    Counting objects: 100% (90/90), done.
    Delta compression using up to 8 threads
    Compressing objects: 100% (47/47), done.
    Writing objects: 100% (48/48), 4.70 KiB | 1.57 MiB/s, done.
    Total 48 (delta 32), reused 0 (delta 0)
    remote: Resolving deltas: 100% (32/32), completed with 23 local objects.
    To https://github.com/tutorial-org-123/multi-tenancy-gitops.git
       d95eca5..a900c39  master -> master
    

    We've now customized the multi-tenancy-gitops repository for your organization, and successfully pushed it to GitHub.
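As a hypothetical sanity check (run from the root of your customized repository), you can confirm that no templated ${GIT_...} placeholders remain:

```shell
# Count any remaining templated placeholders under 0-bootstrap; outside the
# repository the directory doesn't exist, in which case the count is 0.
leftover=$(grep -R '\${GIT_' 0-bootstrap 2>/dev/null | wc -l | tr -d ' ')
if [ "$leftover" = "0" ]; then
  echo "customization complete"
else
  echo "placeholders remain: $leftover"
fi
```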



Connect ArgoCD to the GitOps repository

Let's now connect your customized GitOps repository to the instance of ArgoCD running in the cluster. Once connected, ArgoCD will use the contents of the repository to create necessary resources.

  1. Locate your GitOps repository

    If necessary, change to the root of your GitOps repository, which is stored in the $GIT_ROOT environment variable.

    Issue the following command to change to your GitOps repository:

    cd $GIT_ROOT
    cd multi-tenancy-gitops
    
  2. Review ArgoCD infrastructure folder

    Let's examine the 0-bootstrap/single-cluster/1-infra/kustomization.yaml to see how ArgoCD manages the resources deployed to the cluster.

    Issue the following command:

    cat 0-bootstrap/single-cluster/1-infra/kustomization.yaml
    

    We can see the contents of the kustomization.yaml:

    resources:
    #- argocd/consolelink.yaml
    #- argocd/consolenotification.yaml
    #- argocd/namespace-ibm-common-services.yaml
    #- argocd/namespace-ci.yaml
    #- argocd/namespace-dev.yaml
    #- argocd/namespace-staging.yaml
    #- argocd/namespace-prod.yaml
    #- argocd/namespace-cloudpak.yaml
    #- argocd/namespace-istio-system.yaml
    #- argocd/namespace-openldap.yaml
    #- argocd/namespace-sealed-secrets.yaml
    #- argocd/namespace-tools.yaml
    #- argocd/namespace-instana-agent.yaml
    #- argocd/namespace-robot-shop.yaml
    #- argocd/namespace-openshift-storage.yaml
    #- argocd/namespace-spp.yaml
    #- argocd/namespace-spp-velero.yaml
    #- argocd/namespace-baas.yaml
    #- argocd/serviceaccounts-tools.yaml
    #- argocd/storage.yaml
    #- argocd/infraconfig.yaml
    #- argocd/machinesets.yaml
    patches:
    - target:
        group: argoproj.io
        kind: Application
        labelSelector: "gitops.tier.layer=infra"
      patch: |-
        - op: add
          path: /spec/source/repoURL
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops-infra.git
        - op: add
          path: /spec/source/targetRevision
          value: master
    

    Notice that the resources to be applied to the cluster are all inactive; they are commented out.

    Let's enable the resources we need by uncommenting them.

  3. Deploy Kubernetes resources with kustomization.yaml

    Open 0-bootstrap/single-cluster/1-infra/kustomization.yaml and uncomment the following resources:

    argocd/namespace-ci.yaml
    argocd/consolenotification.yaml
    argocd/namespace-ibm-common-services.yaml
    argocd/namespace-tools.yaml
    

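If you prefer the command line to an editor, entries can be uncommented with sed; here is a sketch against a hypothetical temporary file (adapt the file path and resource list for your real kustomization.yaml):

```shell
# Uncomment selected resource entries by stripping their leading "#".
f=$(mktemp)
printf '%s\n' 'resources:' \
  '#- argocd/consolelink.yaml' \
  '#- argocd/consolenotification.yaml' \
  '#- argocd/namespace-ci.yaml' > "$f"
for r in argocd/consolenotification.yaml argocd/namespace-ci.yaml; do
  sed -i.bak "s|^#- ${r}\$|- ${r}|" "$f"
done
result=$(cat "$f")
printf '%s\n' "$result"
```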
    You will now have the following resources uncommented for the infrastructure layer:

    resources:
    #- argocd/consolelink.yaml
    - argocd/consolenotification.yaml
    - argocd/namespace-ibm-common-services.yaml
    - argocd/namespace-ci.yaml
    #- argocd/namespace-dev.yaml
    #- argocd/namespace-staging.yaml
    #- argocd/namespace-prod.yaml
    #- argocd/namespace-cloudpak.yaml
    #- argocd/namespace-istio-system.yaml
    #- argocd/namespace-openldap.yaml
    #- argocd/namespace-sealed-secrets.yaml
    - argocd/namespace-tools.yaml
    #- argocd/namespace-instana-agent.yaml
    #- argocd/namespace-robot-shop.yaml
    #- argocd/namespace-openshift-storage.yaml
    #- argocd/namespace-spp.yaml
    #- argocd/namespace-spp-velero.yaml
    #- argocd/namespace-baas.yaml
    #- argocd/serviceaccounts-tools.yaml
    #- argocd/storage.yaml
    #- argocd/infraconfig.yaml
    #- argocd/machinesets.yaml
    patches:
    - target:
        group: argoproj.io
        kind: Application
        labelSelector: "gitops.tier.layer=infra"
      patch: |-
        - op: add
          path: /spec/source/repoURL
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops-infra.git
        - op: add
          path: /spec/source/targetRevision
          value: master
    

    Commit and push changes to your git repository:

    git add .
    git commit -s -m "Initial bootstrap setup for infrastructure"
    git push origin $GIT_BRANCH
    

    The changes have now been pushed to your GitOps repository:

    Enumerating objects: 11, done.
    Counting objects: 100% (11/11), done.
    Delta compression using up to 8 threads
    Compressing objects: 100% (6/6), done.
    Writing objects: 100% (6/6), 576 bytes | 576.00 KiB/s, done.
    Total 6 (delta 5), reused 0 (delta 0)
    remote: Resolving deltas: 100% (5/5), completed with 5 local objects.
    To https://github.com/prod-ref-guide/multi-tenancy-gitops.git
       a900c39..e3f696d  master -> master
    
  4. Examine bootstrap.yaml residing in 0-bootstrap/single-cluster/.

    The bootstrap.yaml file is used to create our first ArgoCD application called bootstrap-single-cluster. This initial ArgoCD application will create all the other ArgoCD applications that control the application, service, and infrastructure resources (such as the ci and dev namespaces) deployed to the cluster.

    Examine the YAML that defines the ArgoCD bootstrap application:

    cat 0-bootstrap/single-cluster/bootstrap.yaml
    

    Notice also how this ArgoCD application has been customized to use the GitOps repository repoURL: https://github.com/prod-ref-guide/multi-tenancy-gitops.git.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: bootstrap-single-cluster
      namespace: openshift-gitops
    spec:
      destination:
        namespace: openshift-gitops
        server: https://kubernetes.default.svc
      project: default
      source:
        path: 0-bootstrap/single-cluster
        repoURL: https://github.com/prod-ref-guide/multi-tenancy-gitops.git
        targetRevision: master
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
    

    Most importantly, see how path: 0-bootstrap/single-cluster refers to the 0-bootstrap/single-cluster folder within this repository. This will result in the creation of individual ArgoCD applications to manage our cluster resources.

    Access the 0-bootstrap/single-cluster/kustomization.yaml:

    cat 0-bootstrap/single-cluster/kustomization.yaml
    

    For now, let's deploy only the infrastructure resources to the cluster. Open 0-bootstrap/single-cluster/kustomization.yaml and comment out 2-services/2-services.yaml and 3-apps/3-apps.yaml as follows:

    resources:
    - 1-infra/1-infra.yaml
    # - 2-services/2-services.yaml
    # - 3-apps/3-apps.yaml
    patches:
    - target:
        group: argoproj.io
        kind: Application
        labelSelector: "gitops.tier.layer=gitops"
      patch: |-
        - op: add
          path: /spec/source/repoURL
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops.git
        - op: add
          path: /spec/source/targetRevision
          value: master
    - target:
        group: argoproj.io
        kind: AppProject
        labelSelector: "gitops.tier.layer=infra"
      patch: |-
        - op: add
          path: /spec/sourceRepos/-
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops.git
        - op: add
          path: /spec/sourceRepos/-
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops-infra.git
    - target:
        group: argoproj.io
        kind: AppProject
        labelSelector: "gitops.tier.layer=services"
      patch: |-
        - op: add
          path: /spec/sourceRepos/-
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops.git
        - op: add
          path: /spec/sourceRepos/-
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops-services.git
    - target:
        group: argoproj.io
        kind: AppProject
        labelSelector: "gitops.tier.layer=applications"
      patch: |-
        - op: add
          path: /spec/sourceRepos/-
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops.git
        - op: add
          path: /spec/sourceRepos/-
          value: https://github.com/prod-ref-guide/multi-tenancy-gitops-apps.git
    

    Commit and push changes to your git repository:

    git add .
    git commit -s -m "Using only infra"
    git push origin $GIT_BRANCH
    

    The changes have now been pushed to your GitOps repository:

    Enumerating objects: 9, done.
    Counting objects: 100% (9/9), done.
    Delta compression using up to 8 threads
    Compressing objects: 100% (5/5), done.
    Writing objects: 100% (5/5), 456 bytes | 456.00 KiB/s, done.
    Total 5 (delta 4), reused 0 (delta 0)
    remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
    To https://github.com/prod-ref-guide/multi-tenancy-gitops.git
       e3f696d..ea3b43f  master -> master
    
  5. Apply ArgoCD bootstrap.yaml

    Recall that you pushed the customized local copy of the GitOps repository to your GitHub account. The repository contains a bootstrap-single-cluster ArgoCD application that is watching this repository and using its contents to manage the cluster.

    When the bootstrap-single-cluster ArgoCD application is applied to the cluster, it will continuously ensure that all the activated resources are applied to the cluster.

    Apply the bootstrap YAML to the cluster:

    oc apply -f 0-bootstrap/single-cluster/bootstrap.yaml
    

    Kubernetes will confirm that the bootstrap ArgoCD application has been created:

    application.argoproj.io/bootstrap-single-cluster created
    

    The bootstrap ArgoCD application will watch the 0-bootstrap/single-cluster folder in our GitOps repository on GitHub.

    In this way, as resources are added to the infrastructure, service and application folders, they will be deployed to the cluster automatically.

    This is therefore the only direct cluster operation we need to perform; from now on, all cluster operations will be performed via Git operations to this repository.

  6. Verify the bootstrap deployment

    Verify that the bootstrap ArgoCD application is running with the following command:

    oc get app/bootstrap-single-cluster -n openshift-gitops
    

    You should see that the bootstrap application was recently updated:

    NAME                       SYNC STATUS   HEALTH STATUS
    bootstrap-single-cluster   Synced        Healthy
    

    HEALTH STATUS may temporarily show Missing; simply re-issue the command until it reports Healthy.
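If you'd rather not re-issue the command by hand, a generic retry helper can poll for you (a sketch; the oc command in the comment assumes you are logged in to the cluster):

```shell
# retry N CMD...: run CMD until it succeeds, at most N times, e.g.
#   retry 30 sh -c 'oc get app/bootstrap-single-cluster -n openshift-gitops | grep -q Healthy'
retry() {
  attempts=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$attempts" ]; then
      return 1
    fi
    sleep 1
  done
}
retry 3 true && echo "ready"
```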

  7. Using the UI to view the newly deployed ArgoCD applications

    In the previous section of this chapter you logged on to the ArgoCD web console. Switch back to that console, refresh the page and you should see the bootstrap-single-cluster ArgoCD application together with many other ArgoCD applications:

    [Screenshot: ArgoCD applications list]

    (You may need to select List view rather than the Tiles view.)

    We can see that eleven ArgoCD applications have been deployed to the cluster as a result of applying bootstrap.yaml. In the next section of the tutorial, we'll examine these applications to see how and why they were created, but for now let's focus on one of them -- the namespace-ci ArgoCD application.

  8. Examining the namespace-ci ArgoCD application resources

    Let's examine the Kubernetes resources applied to the cluster by the namespace-ci ArgoCD application.

    In the ArgoCD application list, click on namespace-ci:

    [Screenshot: the namespace-ci ArgoCD application]

    (You may need to clear filters to see this screenshot.)

    The directed graph shows that the namespace-ci ArgoCD application has created four Kubernetes resources: the ci namespace and three role bindings.

  9. Verify the namespace using the oc CLI

    We've seen the new namespace definition in the GitOps repository and visually in the ArgoCD UI. Let's also verify it via the command line:

    Type the following command:

    oc get namespace ci -o yaml
    

    This will list the full details of the ci namespace:

    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"namespace-ci"},"name":"ci"},"spec":{}}
        openshift.io/sa.scc.mcs: s0:c27,c9
        openshift.io/sa.scc.supplemental-groups: 1000720000/10000
        openshift.io/sa.scc.uid-range: 1000720000/10000
      creationTimestamp: "2021-08-31T15:27:32Z"
      labels:
        app.kubernetes.io/instance: namespace-ci
      managedFields:
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:kubectl.kubernetes.io/last-applied-configuration: {}
            f:labels:
              .: {}
              f:app.kubernetes.io/instance: {}
          f:status:
            f:phase: {}
        manager: argocd-application-controller
        operation: Update
        time: "2021-08-31T15:27:32Z"
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              f:openshift.io/sa.scc.mcs: {}
              f:openshift.io/sa.scc.supplemental-groups: {}
              f:openshift.io/sa.scc.uid-range: {}
        manager: cluster-policy-controller
        operation: Update
        time: "2021-08-31T15:27:32Z"
      name: ci
      resourceVersion: "2255607"
      selfLink: /api/v1/namespaces/ci
      uid: fff6b82b-6318-4828-83bb-ade4e8e3c0cf
    spec:
      finalizers:
      - kubernetes
    status:
      phase: Active
    

    Notice how manager: argocd-application-controller identifies that this namespace was created by ArgoCD.

    It's important to understand the sequence of actions. We simply deployed the bootstrap-single-cluster ArgoCD application, and it ultimately resulted in the creation of the namespace-ci ArgoCD application which created the ci namespace.

    Once the bootstrap-single-cluster ArgoCD application has been applied, we don't apply resources to the cluster directly; the cluster state is determined by the ArgoCD application YAMLs in the corresponding application, service and infrastructure folders. It's these ArgoCD applications that create and manage the underlying Kubernetes resources, using the GitOps repository as the source of truth.

  10. Understanding the namespace-ci ArgoCD application

    Let's examine the ArgoCD application namespace-ci to see how it created the ci namespace and three role bindings in the cluster.

    Issue the following command to examine its YAML:

    cat 0-bootstrap/single-cluster/1-infra/argocd/namespace-ci.yaml
    

    Notice that apiVersion and kind identify this as an ArgoCD application:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: namespace-ci
      labels:
        gitops.tier.layer: infra
      annotations:
        argocd.argoproj.io/sync-wave: "100"
    spec:
      destination:
        namespace: ci
        server: https://kubernetes.default.svc
      project: infra
      source:
        path: namespaces/ci
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
    

    Most importantly, see how this namespace-ci ArgoCD application monitors the folder path: namespaces/ci in https://github.com/prod-ref-guide/multi-tenancy-gitops-infra.git; it then applies the contents of this folder to the cluster whenever its content changes.

    Notice the automated syncPolicy: any changes to this folder will be applied to the cluster automatically; we do not need to perform a manual Sync operation from the ArgoCD UI.

  11. Examine the ci namespace YAML

    To examine this, navigate to the multi-tenancy-gitops-infra repository you cloned previously:

    cd $GIT_ROOT
    cd multi-tenancy-gitops-infra
    

    Let's examine the ci namespace YAML in the namespaces/ci folder:

    cat namespaces/ci/namespace.yaml
    

    It's a very simple YAML:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: ci
    spec: {}
    

    This is a YAML that we would normally apply to the cluster manually or via a script. With GitOps, however, we push the ArgoCD application that references this YAML to GitHub, and ArgoCD applies the ci namespace YAML to the cluster. This is the essence of GitOps: we declare what we want to appear in the cluster using Git, and ArgoCD synchronizes the cluster with that declaration.
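The reconcile idea behind this can be illustrated with a toy sketch: one file stands in for the Git repository (desired state), another for the cluster (actual state), and the "controller" copies desired over actual whenever they differ:

```shell
# Toy reconcile loop: sync "actual" state to match "desired" state.
desired=$(mktemp); actual=$(mktemp)
echo "namespace: ci"  > "$desired"   # what we declared in "Git"
echo "namespace: old" > "$actual"    # what the "cluster" currently has
if ! cmp -s "$desired" "$actual"; then
  cp "$desired" "$actual"            # the "sync" operation
fi
synced=$(cat "$actual")
echo "$synced"
```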

  12. Examine the ci rolebinding YAML

    Now that we've seen how the namespace was created, let's see how the three other rolebindings were created by the namespace-ci ArgoCD application.

    In the same namespaces/ci folder as the ci namespace YAML, there is a rolebinding.yaml file. This file is also applied to the cluster by the namespace-ci ArgoCD application, which continuously watches this folder.

    Examine this file with the following command:

    cat namespaces/ci/rolebinding.yaml
    

    This YAML is slightly more complex than the namespace YAML:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: system:image-puller-dev
      namespace: ci
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:image-puller
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts:dev
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: system:image-puller-staging
      namespace: ci
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:image-puller
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts:staging
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: system:image-puller-prod
      namespace: ci
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:image-puller
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts:prod
    ---
    

    However, its structure is quite straightforward; look carefully and you'll see that there are indeed three rolebindings defined in this YAML. Each one binds the same system:image-puller cluster role to the group of service accounts in a different namespace: dev, staging and prod. This grants workloads in those namespaces permission to pull images from the ci namespace.

    We'll see later how these rolebindings are important; they grant only the narrowly scoped permissions that other namespaces need on the ci namespace, helping to create a well governed cluster.

    This confirms why we saw four resources created by the namespace-ci ArgoCD application in the ArgoCD UI: one namespace and three rolebindings.

    Again, notice the pattern: a single ArgoCD application manages one or more Kubernetes resources in the cluster -- using one or more YAML files in which those resources are defined.
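
    This pattern also makes extension straightforward. For example, if a hypothetical qa namespace were added later, granting it the same pull access would simply mean appending one more block to rolebinding.yaml and pushing the change; the qa name here is purely illustrative:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: system:image-puller-qa
      namespace: ci
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:image-puller
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts:qa
    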

  13. The bootstrap-single-cluster ArgoCD application in more detail

    In the ArgoCD UI Applications view, click on the bootstrap-single-cluster application:

    argocd63

    You can see the bootstrap application creates two Kubernetes resources, the infra ArgoCD application and the infra ArgoCD project.

    An ArgoCD project is a mechanism by which we can group related resources; we keep all our ArgoCD applications that manage infrastructure in the infra project. Later, we'll create a services project for the ArgoCD applications that manage the services we want deployed to the cluster such as the CP4S application.
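
    An ArgoCD project is itself declared as a Kubernetes resource of kind AppProject. As a sketch, a permissive infra project might look like the following; the field values are assumed for illustration rather than taken from the repository:

    apiVersion: argoproj.io/v1alpha1
    kind: AppProject
    metadata:
      name: infra
      namespace: openshift-gitops
    spec:
      ## Which Git repositories applications in this project may deploy from
      sourceRepos:
        - '*'
      ## Which clusters and namespaces they may deploy to
      destinations:
        - server: '*'
          namespace: '*'
    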

  14. The infra ArgoCD application

    Let's examine the infra ArgoCD application in more detail to see how it works.

    In the ArgoCD UI Applications view, click on the open application icon for the infra application:

    argocd64

    We can see that the infra ArgoCD application creates 9 ArgoCD applications, each of which is responsible for applying specific YAMLs to the cluster according to the folder the ArgoCD application is watching.

    It's the infra ArgoCD application that watches the 0-bootstrap/single-cluster/1-infra/argocd folder for ArgoCD applications that apply infrastructure resources to our cluster. It was the infra application that created the namespace-ci ArgoCD application which manages the ci namespace that we've been exploring in this section of the tutorial.

    We'll continually reinforce these relationships as we work through the tutorial. You might like to spend some time exploring the ArgoCD UI and ArgoCD YAMLs before you proceed, though it's not necessary, as you'll get lots of practice as we proceed.


Managing change

We end this topic by exploring how ArgoCD provides some advanced resource management features. We'll focus on two aspects of how to manage resources:

  • Dynamic: We'll see how ArgoCD allows us to easily and quickly update any resource in the cluster. We're focusing on infrastructure components like namespaces, but the same principles apply to applications, databases, workflow engines and messaging systems -- any resource within the cluster.

  • Governed: While we want to be agile, we also want a well governed system. As we've seen, GitOps allows us to define the state of the cluster from a git repository -- but can we also ensure that the cluster stays that way? We'll see how ArgoCD helps with configuration drift -- ensuring that a cluster only changes when properly approved.


Dynamic updates

  1. Locate your GitOps repository

    If necessary, change to the root of your GitOps repository, which is stored in the $GIT_ROOT environment variable.

    Issue the following command to change to your GitOps repository:

    cd $GIT_ROOT
    cd multi-tenancy-gitops
    


  2. Customize the web console banner

    Examine the banner in the OpenShift web console:

    argocd20

    We're going to use GitOps to modify this banner dynamically.


  3. The banner YAML

    The banner's properties are defined by the YAML in /0-bootstrap/single-cluster/1-infra/argocd/consolenotification.yaml. This YAML is currently being used by the cntk-consolenotification ArgoCD application that was deployed earlier.

    We can examine the YAML with the following command:

    cat 0-bootstrap/single-cluster/1-infra/argocd/consolenotification.yaml
    

    which shows that the banner properties are supplied as Helm values for the ocp-console-notification chart; the chart renders them into a ConsoleNotification custom resource:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: cntk-consolenotification
      labels:
        gitops.tier.layer: infra
      annotations:
        argocd.argoproj.io/sync-wave: "100"
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    spec:
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
      destination:
        namespace: openshift-gitops
        server: https://kubernetes.default.svc
      project: infra
      source:
        path: consolenotification
        helm:
          values: |
            ocp-console-notification:
              ## The name of the ConsoleNotification resource in the cluster
              name: "banner-env"
              ## The background color that should be used for the banner
              backgroundColor: teal
              ## The color of the text that will appear in the banner
              color: "'#fff'"
              ## The location of the banner. Options: BannerTop, BannerBottom, BannerTopBottom
              location: BannerTop
              ## The text that should be displayed in the banner. This value is required for the banner to be created
              text: "Cluster Description"
    

    See how the banner at the top of the screen:

    • contains the text Cluster Description
    • is located at top of the screen
    • has the color teal
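
    For reference, the Helm chart renders these values into an OpenShift ConsoleNotification custom resource. Assuming the chart passes the values through directly, the rendered resource would look roughly like this:

    apiVersion: console.openshift.io/v1
    kind: ConsoleNotification
    metadata:
      name: banner-env
    spec:
      text: Cluster Description
      location: BannerTop
      color: '#fff'
      backgroundColor: teal
    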


  4. Modify the YAML for this banner

    Let's now change this YAML.

    In your editor, modify this YAML and change the below fields as follows:

    ocp-console-notification:
      ## The name of the ConsoleNotification resource in the cluster
      name: "banner-env"
      ## The background color that should be used for the banner
      backgroundColor: red
      ## The color of the text that will appear in the banner
      color: "'#fff'"
      ## The location of the banner. Options: BannerTop, BannerBottom, BannerTopBottom
      location: BannerTop
      ## The text that should be displayed in the banner. This value is required for the banner to be created
      text: "Production Reference Guide"
    

    Our intention is to change the banner's backgroundColor to red and its text to Production Reference Guide. If you look at the diff:

    git diff
    

    you should see the following:

    diff --git a/0-bootstrap/single-cluster/1-infra/argocd/consolenotification.yaml b/0-bootstrap/single-cluster/1-infra/argocd/consolenotification.yaml
    index 30adf1a..596e821 100644
    --- a/0-bootstrap/single-cluster/1-infra/argocd/consolenotification.yaml
    +++ b/0-bootstrap/single-cluster/1-infra/argocd/consolenotification.yaml
    @@ -26,10 +26,10 @@ spec:
              name: "banner-env"
              ## The background color that should be used for the banner
    -         backgroundColor: teal
    +         backgroundColor: red
              ## The color of the text that will appear in the banner
              color: "'#fff'"
              ## The location of the banner. Options: BannerTop, BannerBottom, BannerTopBottom
              location: BannerTop
              ## The text that should be displayed in the banner. This value is required for the banner to be created
    -         text: "Cluster Description"
    +         text: "Production Reference Guide"
    


  5. Make the web console YAML change active

    Let's make these changes visible to the cntk-consolenotification ArgoCD application via GitHub.

    Add all changes in the current folder to a git index, commit them, and push them to GitHub:

    git add .
    git commit -s -m "Modify console banner"
    git push origin $GIT_BRANCH
    

    You'll see the changes being pushed to GitHub:

    Enumerating objects: 13, done.
    Counting objects: 100% (13/13), done.
    Delta compression using up to 8 threads
    Compressing objects: 100% (7/7), done.
    Writing objects: 100% (7/7), 670 bytes | 670.00 KiB/s, done.
    Total 7 (delta 5), reused 0 (delta 0)
    remote: Resolving deltas: 100% (5/5), completed with 5 local objects.
    To https://github.com/tutorial-org-123/multi-tenancy-gitops.git
       a1e8292..b49dff5  master -> master
    

    Let's see what effect they have on the web console.


  6. A dynamic change to the web console

    You can either wait for ArgoCD to automatically sync the cntk-consolenotification application or manually Refresh and Sync the infra application yourself:

    argocd21

    Returning to the OpenShift web console, you'll notice changes.

    argocd22

    Notice the dynamic nature of these changes; we updated the console YAML, pushed our changes to our GitOps repository and everything else happened automatically.

    As a result, our OpenShift console has a new banner color and text. This is a simple yet effective demonstration of how we can quickly roll out very visible changes.


Configuration drift

  1. Governing changes to the dev namespace

    Let's now look at how ArgoCD monitors Kubernetes resources for configuration drift, and what happens if it detects an unexpected change to a monitored resource.

    Don't worry about the following command; it might seem drastic and even reckless, but as you'll see, everything will be OK.

    Let's delete the dev namespace from the cluster:

    oc get namespace dev
    oc delete namespace dev
    

    See how the active namespace:

    NAME   STATUS   AGE
    dev    Active   2d18h
    

    is deleted:

    namespace "dev" deleted
    

    We can see that the dev namespace has been manually deleted from the cluster.


  2. GitOps repository as a source of truth

    If you switch back to the ArgoCD UI, you may see that ArgoCD has detected a configuration drift:

    • a resource is Missing (the dev namespace)
    • the namespace-dev application is therefore OutOfSync
    • namespace-dev then starts Syncing with the GitOps repository

    argocd23

    After a while we'll see that namespace-dev is Healthy and Synced:

    argocd24

    ArgoCD has detected a configuration drift and resynchronized with the GitOps repository, re-applying the dev namespace to the cluster.

    Note

    You may not see the first screenshot if ArgoCD detects and corrects the missing dev namespace before you get a chance to switch to the ArgoCD UI. Don't worry, you can try the delete operation again!


  3. The restored dev namespace

    Issue the following command to determine the status of the dev namespace:

    oc get namespace dev
    

    which confirms that the dev namespace has been re-instated:

    NAME   STATUS   AGE
    dev    Active   115s
    

    Note that it is a different instance of the dev namespace, as indicated by its AGE value.

    Notice the well governed nature of these changes; GitOps is our source of truth about the resources deployed to the cluster. ArgoCD restores any resources that suffer from configuration drift to their GitOps-defined configuration. It doesn't really matter whether the change was accidental or not, ArgoCD considers any change to a managed resource (e.g. the dev namespace) as invalid unless it's synced with its source of truth -- your GitOps repository.

    We'll see more about well governed changes when we look at Tekton pipelines that build and test any changes before they are deployed.

Congratulations!

You've used ArgoCD and the GitOps repository to set up the ci, tools and dev namespaces. You've seen how to create ArgoCD applications that watch their respective GitOps folders for details of the namespace resources they should apply to the cluster. You've seen how you can dynamically change deployed resources by updating the resource definition in the GitOps repository. Finally, you've experienced how ArgoCD keeps the cluster synchronized with the GitOps repository as its source of truth; any unexpected configuration drift will be corrected without intervention.

In the next section of this tutorial, we're going to deploy some services into the infrastructure namespaces we've created in this topic. These services will include Artifactory and the MQ operator.