Pre-installation steps - Data360_DQ+ - 11.X

Data360 DQ+ Enterprise Installation

Complete the following tasks to prepare your machines before installing Data360 DQ+.

Select a maintenance node

Before installation, select a machine within your cluster to use as your "Maintenance Machine" and make a note of its IP address. This is the machine on which most of the installation tasks are performed.

Setting up your machines

Once your machines are up and running, you will need to do the following, in order to prepare each machine for installation.

Set up /etc/hosts file (optional)

If you are not using DNS and you have not set up the mapping of IP addresses to host names for the machines used by Data360 DQ+, edit the /etc/hosts file to add the entries. The mappings should be set up on each machine and should include an entry for every remote machine that the machine needs to reach over SSH.
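For example, if the cluster consists of a maintenance machine and two worker nodes, each machine's /etc/hosts could contain entries such as the following (the IP addresses and host names here are placeholders; substitute your own):

```
10.0.0.10   dq-maintenance
10.0.0.11   dq-node1
10.0.0.12   dq-node2
```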

Set up an operating system user

You will need to create an operating system user on each machine that runs Data360 DQ+. For simplicity, naming this operating system user 'sagacity' is recommended. In the Performing installation section of the guide, you will then add this user to the file.

  1. Create a group and user on the machine. For example, on RedHat Linux, to add a user with username sagacity in the sagacity group, this is done with the following command:

    sudo useradd -u 5000 sagacity

  2. Repeat this command on each of your instances.

Enable password-less sudo access (optional)

After the new user is created, you can optionally enable password-less sudo access.

To do so, run the following command and then edit the sudoers file as shown below.

Note: If you are prompted for a password, you should run visudo as the root user.

sudo visudo


To enable password-less sudo access, add the following line to the sudoers file:

sagacity ALL=(ALL) NOPASSWD: ALL

If you do not want to set up password-less sudo access, you can add the following line instead; the sagacity user will then be prompted for a password when running sudo commands:

sagacity ALL=(ALL) ALL

Switch to the newly created user before proceeding

Before proceeding, it is very important to switch from the root user to the newly created operating system user, on each node. This will prevent you from accidentally deleting important files.

To become the new user, type the following command, where sagacity is the name you have chosen for your new user:

su - sagacity

To ensure that you have successfully switched users, verify that the whoami command returns the username you expected.


Setting up SSH keys

After you have created your sagacity operating system user, you will need to set up password-less SSH access:

  • From the Maintenance Machine to other machines using the sagacity user.
  • From the Maintenance Machine to itself using the sagacity user.
  • From all remote machines to the Maintenance Machine, using the sagacity user.

To do so, perform the following steps.

Note: If you are running your cluster on AWS, the ssh-copy-id command used in this section will not work. If this is the case, see Setting up SSH keys when running on AWS.

Set up SSH for the sagacity user

Generate the sagacity user’s key

If you do not already have a public key to use for the sagacity user, you can generate one with this command:

sudo -u sagacity ssh-keygen -m PEM -t rsa -b 2048

Distribute sagacity user public key to cluster

Once you have a sagacity key, you will need to distribute the public key to all nodes in the cluster, including the maintenance node. You can do this by using commands such as:

sudo -u sagacity ssh-copy-id ${maint_ip_addr}

sudo -u sagacity ssh-copy-id ${cluster_member}
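If you have many nodes, the distribution can be scripted. The sketch below is a dry run: it only prints the ssh-copy-id command for each node in a placeholder list (the IP addresses are hypothetical); remove the echo to actually run the commands.

```shell
# Placeholder node list; substitute your cluster members' IPs or host names.
NODES="10.0.0.11 10.0.0.12 10.0.0.13"
for node in $NODES; do
  # Dry run: print the command that would be executed for each node.
  echo sudo -u sagacity ssh-copy-id "$node"
done
```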

Setting up SSH keys when running on AWS

Setting up SSH for the maintenance machine to itself

Commands for the sagacity user:

sudo -u sagacity ssh-keygen -m PEM -t rsa -b 2048

sudo -u sagacity vi ~sagacity/.ssh/authorized_keys

(Paste the contents of the sagacity user's public key, found under ~/.ssh/.)

(Save and exit.)

ssh <maintenance machine IP address>

Setting up SSH from the maintenance machine to all other machines in the cluster

Commands for the sagacity user:

sudo -u sagacity ssh-keygen -m PEM -t rsa -b 2048

vi /home/sagacity/.ssh/authorized_keys

(Paste the contents of the sagacity user's public key from ~/.ssh/ on the maintenance machine.)

(Save and exit.)

chmod 0600 ~/.ssh/authorized_keys

This allows the sagacity user to SSH to this machine as the sagacity user.
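SSH silently ignores keys when file permissions are too open; besides the authorized_keys mode set above, the .ssh directory itself must be private. A minimal sketch of the expected settings, run as the sagacity user on each machine:

```shell
# Ensure the .ssh directory and authorized_keys file exist with the
# permissions sshd requires for public-key authentication.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 0700 ~/.ssh
chmod 0600 ~/.ssh/authorized_keys
```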

Setting up SSH from other machines in the cluster to the maintenance machine

You also need to make it so that every remote machine in the cluster can SSH to the Maintenance Machine using the sagacity user:

  1. Log into each machine as the sagacity user.
  2. Generate a public key for the machine.
  3. Paste the contents of the public key into the authorized_keys file of the Maintenance Machine.

    sudo -u sagacity ssh-keygen -m PEM -t rsa -b 2048

    [sagacity@maintenance-box-ip ~]$ vi ~/.ssh/authorized_keys

(Paste the contents of the public key from the remote box.)

    (Save and exit.)

    Alternatively, after generating the public key, you could perform the following steps to avoid any potential copy-paste errors:
    1. scp .ssh/ remoteMachine:.ssh/
    2. ssh remoteMachine
    3. cd .ssh
    4. cat >> authorized_keys
    5. chmod 600 authorized_keys


After setting up SSH for the sagacity user, you should verify that SSH connections can be made between hosts without the use of a password, as follows:

SSH verification for the sagacity user

  1. sudo -u sagacity ssh ${maint_ip_addr} uptime
  2. sudo -u sagacity ssh ${cluster_member} uptime

If successful, the uptime command should return how long a server has been up and running without requesting a password.

For example:

sudo ssh sagacity@${maint_ip_addr} uptime

07:11:12 up 576 days, 21:26, 0 users, load average: 0.00, 0.00, 0.00

Install and set up JDK 11

Run the following commands as root to install JDK 11 from the Amazon Corretto yum repository. The key and repository URLs below are Amazon Corretto's published locations; substitute your own if your organization mirrors them:

sudo su

mkdir -p /opt/java/current

cd /opt

rpm --import https://yum.corretto.aws/corretto.key

curl -L -o /etc/yum.repos.d/corretto.repo https://yum.corretto.aws/corretto.repo

yum install -y java-11-amazon-corretto-devel

cp -r /usr/lib/jvm/java-*/* /opt/java/current

After executing the above commands, set JAVA_HOME as:

echo "export JAVA_HOME=/opt/java/current" >> /etc/bashrc

echo "export PATH=\$JAVA_HOME/bin:\$PATH" >> /etc/bashrc
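The backslash before $JAVA_HOME in the second command matters: it keeps the variable unexpanded inside /etc/bashrc, so PATH is resolved when a shell starts rather than when you run echo. You can check the effect safely using a temporary file in place of /etc/bashrc:

```shell
# Write the same two export lines to a temp file and source it,
# mimicking what a login shell does with /etc/bashrc.
tmp=$(mktemp)
echo "export JAVA_HOME=/opt/java/current" >> "$tmp"
echo "export PATH=\$JAVA_HOME/bin:\$PATH" >> "$tmp"
. "$tmp"
echo "$JAVA_HOME"   # /opt/java/current
```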