Application deployment with Kubernetes Cluster - Data360_DQ+ - 12.0

Data360 DQ+ AWS Installation


Prerequisites:

  • You have created the required infrastructure and have copies of the dqplus.properties and pw.properties files that were created as part of this process (see Creating the infrastructure). Ensure that you have the latest versions of these files.
  • Your Precisely representative must have provided you with a <build>-k8s-dist.zip file.
  • If you are installing from a separate virtual machine, also known as a maintenance node, you need at least 32 GB of free disk space to deploy and to run server utilities later.
  • Execute the following command to configure kubectl for the K8s cluster: aws eks update-kubeconfig --region {region_name} --name {Cluster_name}. Then check and confirm the values by executing these commands: kubectl get svc and kubectl get node
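    For example, using illustrative region and cluster names: aws eks update-kubeconfig --region us-east-1 --name dqplus-eks-cluster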
  1. Verify all properties files required by the installer that are specific to the K8s deployment.
  2. You will also need to log in to Docker. To do this, execute the following command after changing the region and ECR registry hostname:

    For example: aws ecr get-login-password --region us-east-1 --profile my-profile-name | docker login --username AWS --password-stdin 0123456.dkr.ecr.us-east-1.amazonaws.com

  3. Execute the shell script to push the images to AWS ECR.

    sh pushimages.sh {CLOUD_TYPE(AWS/GCP/AZURE)} {AWS_ACCOUNT_ID} {AWS_ECR_REGION} {GCP_PROJECT_ID} {GCP_REGION}

    For example, for AWS: sh pushimages.sh "AWS" "0123456" "us-east-1"

  4. Go to the deployment directory, which is at the top level of the unzipped <build>-k8s-dist.zip file.
  5. Create a folder with the name "<deployment_ID>" in <version>-k8s-dist/deployment/environments/
  6. Copy the properties files dqplus.properties and pw.properties, which were produced during the creation of the infrastructure, into <deployment_ID>/<deployment_ID>.properties and <deployment_ID>/pw.properties respectively.
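
    For example, for a hypothetical deployment ID of prod01, with the properties files in the current directory:

    mkdir -p <version>-k8s-dist/deployment/environments/prod01

    cp dqplus.properties <version>-k8s-dist/deployment/environments/prod01/prod01.properties

    cp pw.properties <version>-k8s-dist/deployment/environments/prod01/pw.properties
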
  7. Create an overrides folder if you have any of the following:
    1. Extra JDBC drivers that you require.
    2. SecuPi tokenization product files for obfuscating dates, names, and values.
    3. Any Java libraries for plugins and other purposes needed to run the application.
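
    For example, an overrides folder might contain files such as the following (the folder location and file names shown are purely illustrative):

    <deployment_ID>/overrides/custom-jdbc-driver.jar

    <deployment_ID>/overrides/secupi-agent.jar
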
  8. Initialize the database schema and populate it with initial data.

    Ensure that the gradle.sh script is executable by running these commands from the deployment folder:

    chmod +x gradle.sh

    chmod +x gradle-dist/bin/gradle

    Then initialize the database schema:

    gradle.sh --info currentBuild to-<deployment_ID> initialize
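
    For example, with the hypothetical deployment ID prod01: gradle.sh --info currentBuild to-prod01 initialize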
  9. Run the predeploy Gradle task from the deployment directory:

    gradle.sh --info currentBuild to-<deployment_ID> predeploy

    where <deployment_ID> is the name of the folder with the two property files.

    This step will create the dqplus-extension Docker image based on the dqplus-os image. If the following property is set to false (the default):

    PUSH_EXTENSION_IMAGE_TO_DOCKER=false

    you will need to push the image to your Docker image repository yourself. If it is set to true, the script will push the image to the AWS ECR repository under this tag:

    DQPLUS_EXTENSION_IMAGE=051704478360.dkr.ecr.us-east-1.amazonaws.com/dqplus-extension:202209021547-dev

    where 'dev' corresponds to the value of the KUBERNETES_ENV_TYPE property.
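
    For example, to push the image manually when PUSH_EXTENSION_IMAGE_TO_DOCKER=false, assuming the image has been built and tagged locally with the name shown above and a dqplus-extension repository exists in your ECR registry (account ID, region, and tag are illustrative):

    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 051704478360.dkr.ecr.us-east-1.amazonaws.com

    docker push 051704478360.dkr.ecr.us-east-1.amazonaws.com/dqplus-extension:202209021547-dev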

    This image will contain the master key store file, tokenization product files (SecuPi or Protegrity), and override files (for example, extra JDBC drivers).

    The predeploy task will also create a values.yaml file in the kubernetes/dqplus-chart folder with the values taken from the <deployment_ID>.properties and pw.properties files.

  10. Inside the kubernetes/deploy-helm/dqplus-chart folder you will find the Helm chart for deployment into the Kubernetes cluster.

    You will need to have the Kubernetes cluster created ahead of time for the deployment to work.

  11. If your organization prefers to create the load balancer and DNS record on its own, you will have to disable their creation in the Helm chart.

    Edit dqplus-chart/templates/dqplus-deployment.yaml and comment out the following lines:

    {{ include "cert-manager" . }}

    {{ include "v2_4_3_full" . }}

    {{ include "v2_4_3_ingclass" . }}

    {{ include "alb_ingress" . }}

    Also remove the dqplus-chart/crds folder, which contains the YAML files used by Helm to create Custom Resource Definitions (CRDs). These CRDs are used by the cert-manager and v2_4_3_full templates.
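
    For example, from the deploy-helm folder: rm -r dqplus-chart/crds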

    Then follow the instructions at these URLs to create the load balancer on your own:

    https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html

    https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html

    Since the v2_4_3_full template is responsible for creating the Kubernetes service account in a non-default Kubernetes namespace, your load balancer should be created with this in mind.

    The namespace is configured using the TARGET_NAMESPACE property. If you would like Helm to create the load balancer, you do not need to edit dqplus-chart/templates/dqplus-deployment.yaml.
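
    For example, assuming the property is set in the <deployment_ID>.properties file and the namespace is named dqplus-dev: TARGET_NAMESPACE=dqplus-dev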

  12. Execute the Helm chart from the deploy-helm folder with a command similar to this one:

    helm install myrelease ./dqplus-chart --disable-openapi-validation --debug -f dqplus-chart/values.yaml

    If you need to run an update after the initial install, use the upgrade command, for example: helm upgrade myrelease ./dqplus-chart --disable-openapi-validation --debug -f dqplus-chart/values.yaml

    where myrelease is the name of the Helm release. You can specify a different release name if you like, but it must be the same for install and upgrade.

  13. The Helm chart installation will take up to 10 minutes to complete.

    While this is in progress, you will need to ask your network administrator to create a DNS entry that maps the hostname specified in the DEPLOY_HOST_URL value in values-aws.yaml to the DNS hostname of the AWS Application Load Balancer created during installation.

    The latter can be found by running a kubectl get ingress command like this:

    kubectl get ingress dqplus-ingress -n dqplus-dev

    where dqplus-dev is the name of the Kubernetes namespace. The ALB address will appear in the output under the ADDRESS column, for example: k8s-dqplusde-dqplusin-7f318bb8fa-1032032061.us-east-1.elb.amazonaws.com

    However, since creation of the load balancer can take around 5 minutes, you may need to repeat this command until the address is available.
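
    Alternatively, you can watch for the address to appear (using the example namespace above): kubectl get ingress dqplus-ingress -n dqplus-dev --watch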

  14. Test access to the instance by opening this URL in a browser, substituting your own DEPLOY_HOST_URL hostname:

    https://cafe-kube.infogix.com/desktop/index.html

    You should see a login screen.