Bare Metal Install Guide

Welcome to the guide for installing Ultima Enterprise in a Bare Metal (BM) environment. The purpose of this guide is to help you install and configure the software in the BM environment. It begins by listing the recommended minimum BM requirements and then walks through the steps to bring up the Kubernetes cluster.

Note

This guide uses the word node to describe the BM server on which the Diamanti distribution is installed.

Minimum BM Requirements

Diamanti UE on BM has the following minimum BM requirements:

Resource            Minimum
Memory              64 GB
CPU                 16 cores
Boot Drive          480 GB
Application Drive   4 x NVMe SSDs (minimum 200 GB per SSD)
Networking          1 NIC

Installation Checklist

  1. The Diamanti Operating System ISO file for the 3.6.2 release. Contact Diamanti Support at support@diamanti.com to copy or download the package.

  2. Download the following RPMs:

    • diamanti-cx

    • diamanti-cx-docker-core

  3. The following configuration parameters are needed for the cluster.

    • 1 IP address for the cluster VIP

    • A minimum of 3 BMs provisioned using the steps listed below. The first 3 nodes will act as control-plane nodes and also as worker nodes.

    • 1 IP address per BM for SSH/Clustering

    • The DNS resolvable short name of the cluster nodes (hostname)

    • The DNS sub-domain for the cluster (pod DNS, e.g., cluster.local)

    • Zone names (optional; required only if the nodes will be in different zones)

  4. The following parameters are optional and can be skipped for PoC environments:

    • Cluster certificates. If not provided, self-signed certificates are generated internally.

      • ca-cert : Certificate Authority

      • server-cert : Cluster certificate (public key) with the following SAN entries:

        - IP addresses: cluster virtual IP address, 10.0.0.1 (Kubernetes clusterIP)

        - DNS: cluster short name, cluster name FQDN, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local

      • server-key : Cluster private key

    • External load balancer configuration, if the VIP is managed by an external load balancer:

      - IP address: virtual IP address

      - Ports:

      • 443 : Load-balanced to Quorum nodes port 7443

      • 6443 : Load-balanced to Quorum nodes port 6443

      • 12346 : Load-balanced to Quorum nodes port 12346
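
If the VIP is managed by an external load balancer, the port mapping above amounts to plain TCP pass-through rules. The fragment below is a hypothetical HAProxy sketch, not a Diamanti-provided configuration; the node names and IP placeholders are illustrative, and any TCP load balancer with equivalent rules works:

```
# Hypothetical HAProxy TCP pass-through for the Diamanti VIP (sketch only)
frontend ui_443
    mode tcp
    bind <virtual-ip>:443
    default_backend quorum_7443

backend quorum_7443
    mode tcp
    server node1 <node1-ip>:7443 check
    server node2 <node2-ip>:7443 check
    server node3 <node3-ip>:7443 check
```

Ports 6443 and 12346 follow the same pattern, load-balanced to the same port on the quorum nodes.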

Installation Steps

There are three steps involved in getting the cluster up and running.

  1. Provisioning the Nodes (BMs)

    This includes provisioning of BMs, hostname, management/data interface and setting up the software.

  2. Creating the Cluster

    This includes creating the Kubernetes cluster using the BMs provisioned in the above step, setting up storage and networking (Diamanti CSI & CNI).

  3. Installing the Licenses

    The cluster is fully functional at this stage, and the license expiry is set to 72 hours by default. The final step is to request the licenses and install them to extend the subscription or trial beyond 72 hours.

Provisioning the Nodes (BMs)

This section provides detailed information about configuring a Diamanti node. Repeat these steps for every node that will be provisioned for the cluster. For example, repeat the steps below three times to provision three nodes.

Note

If you are using a DHCP-based network configuration, Diamanti expects the DHCP reservation to use the MAC address associated with the management interface.

Note

By default, the management interface is used for Kubernetes management traffic and pod data traffic. You can change this if you have an additional adapter/port available on the node and want to keep Kubernetes control traffic separate from pod data traffic.

Note

When configuring the Diamanti node, the following IP addresses are used internally and are not available for network configuration:

• 172.20.0.0/24 (Diamanti pod management network)
• 172.17.0.0/16 (Docker default network)
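
When planning your management and overlay subnets, a candidate CIDR can be checked against these reserved ranges with Python's standard ipaddress module. This is a generic sketch, not part of the Diamanti tooling:

```python
import ipaddress

# Ranges reserved internally by the Diamanti node (from the note above).
RESERVED = [
    ipaddress.ip_network("172.20.0.0/24"),  # Diamanti pod management network
    ipaddress.ip_network("172.17.0.0/16"),  # Docker default network
]

def conflicts(subnet):
    """Return the reserved ranges that overlap the proposed subnet."""
    net = ipaddress.ip_network(subnet, strict=False)
    return [str(r) for r in RESERVED if net.overlaps(r)]

print(conflicts("172.16.230.0/24"))  # [] -- the management network used in the examples is safe
print(conflicts("172.17.5.0/24"))    # ['172.17.0.0/16'] -- collides with the Docker default network
```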

Follow these steps to set up the node:

  1. Install the BM using the Diamanti-provided ISO.

  2. Using a console terminal, log in to the node using the following credentials:

    User: diamanti
    Password: diamanti
    
  3. Verify the node name using the following command:

    $ hostname
    

    If the node name is diamanti, you need to manually configure the node name. You can change the hostname using the following command:

    $ sudo hostnamectl set-hostname <hostname>
    

    Note

    The hostname should not be a fully qualified domain name (FQDN)
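
    The FQDN rule can be checked with a small shell test before proceeding. This is a generic sketch (check_hostname is a hypothetical helper, not a Diamanti command):

```shell
# Reject FQDNs and the factory-default name; prints a short verdict.
check_hostname() {
  case "$1" in
    *.*)      echo "invalid-fqdn" ;;
    diamanti) echo "invalid-default" ;;
    "")       echo "invalid-empty" ;;
    *)        echo "valid" ;;
  esac
}

check_hostname dssserv10                    # prints: valid
check_hostname dssserv10.bos.diamanti.com   # prints: invalid-fqdn
```

    On the node itself you would call check_hostname "$(hostname)".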

  4. Run the following commands to set up the software release:

    $ sudo rpm -ivh diamanti-cx*.rpm
    $ sudo rpm -ivh <docker-core-rpm-name>
    $ sudo reboot
    

    Note

    Diamanti strongly recommends that you change the password immediately after logging in. For details about completing this process, see Changing the Node Password in the Advanced Configuration section.

  5. Verify the date, time, and timezone on the node using the following command:

    $ date
    

    To change the date, time, or timezone, see Configuring the Date, Time, or Timezone in the Advanced Configuration section.

  6. Verify that the management IP address is configured on the interface and is reachable. If there is more than one interface, either bond the interfaces or disable the unused interfaces.

The output below is an example. Confirm that the link is up, and the IP configuration is present.

$ ifconfig

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  inet 172.16.230.80  netmask 255.255.255.0  broadcast 172.16.230.255
  inet6 fe80::3eec:efff:fe1b:4edc  prefixlen 64  scopeid 0x20<link>
  ether 3c:ec:ef:1b:4e:dc  txqueuelen 1000  (Ethernet)
  RX packets 3489584041  bytes 4843699598117 (4.4 TiB)
  RX errors 0  dropped 165  overruns 0  frame 0
  TX packets 977256709  bytes 106125286970 (98.8 GiB)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



eno2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  inet6 fe80::3eec:efff:fe1b:4edd  prefixlen 64  scopeid 0x20<link>
  ether 3c:ec:ef:1b:4e:dd  txqueuelen 1000  (Ethernet)
  RX packets 56491  bytes 19412024 (18.5 MiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 28591  bytes 6815792 (6.5 MiB)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Perform the following steps to create a bond:

  1. Run the following script to create the bond.

    sudo diamanti-bonding.sh enable dhcp link
    
  2. Run the following command to reboot the node.

    sudo reboot
    

OR

To disable the second interface on the machine:

  1. Run the following command to make changes in the interface configuration file.

    sudo vi /etc/sysconfig/network-scripts/ifcfg-eno2
    
  2. In the file, set BOOTPROTO=none and ONBOOT=no.

  3. Run the following command to reboot the node.

    sudo reboot
    

If the IP configuration is not present, you need to complete this configuration manually. For more information, see Manually Configuring the Management IP Configuration in the Advanced Configuration section. After completing this procedure, verify the management IP address configuration again.

Verify that the management IP address is reachable using the following command:

ping -c 4 <management-ip-address>

For example:

PING 172.16.230.80 (172.16.230.80) 56(84) bytes of data.
64 bytes from 172.16.230.80: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 172.16.230.80: icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from 172.16.230.80: icmp_seq=3 ttl=64 time=0.032 ms
64 bytes from 172.16.230.80: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.230.80 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3084ms
rtt min/avg/max/mdev = 0.032/0.041/0.053/0.010 ms

Take the steps necessary to ensure that the management IP address is configured and reachable before continuing.

Determine the gateway IP address using the following command:

$ ip route list match 255.255.255.255
default via 172.16.230.1 dev eno1 proto dhcp metric 100

Ping the default gateway:

ping -c 4 <gateway-ip-address>

For example:

  $ ping -c 4 172.16.230.1

  PING 172.16.230.1 (172.16.230.1) 56(84) bytes of data.
  64 bytes from 172.16.230.1: icmp_seq=1 ttl=255 time=0.564 ms
  64 bytes from 172.16.230.1: icmp_seq=2 ttl=255 time=0.593 ms
  64 bytes from 172.16.230.1: icmp_seq=3 ttl=255 time=0.601 ms
  64 bytes from 172.16.230.1: icmp_seq=4 ttl=255 time=0.516 ms

--- 172.16.230.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3069ms
rtt min/avg/max/mdev = 0.516/0.568/0.601/0.040 ms
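
If you need the gateway address in a script (for example, to automate the ping check), it can be extracted from the ip route output. A sketch using the sample line shown above:

```shell
# The third field of the 'default via <gw> ...' line is the gateway address.
route_line="default via 172.16.230.1 dev eno1 proto dhcp metric 100"
gateway="$(printf '%s\n' "$route_line" | awk '{print $3}')"
echo "$gateway"   # prints: 172.16.230.1
```

On a live node, route_line would come from the ip route command shown above rather than a literal string.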
  1. Verify that a Domain Name System (DNS) server is configured and reachable.

    The output shown below is an example.

    $ cat /etc/resolv.conf
    # Generated by NetworkManager
    search bos.diamanti.com
    nameserver 172.16.230.200
    

    Note

    The resolv.conf file should contain directives that specify the default search domains along with the list of IP addresses of nameservers (DNS servers) available for resolution.

    If DNS servers are not configured, you can choose to manually configure one or more DNS servers. For details about completing this process, see Manually Setting up the DNS Configuration in Advanced Configuration section.

  2. Verify that the nameserver is reachable using the following command:

    nslookup $HOSTNAME
    

    Example

    $ nslookup dssserv10
    Server:         172.16.230.200
    Address:        172.16.230.200#53
    
    Name:   dssserv10.bos.diamanti.com
    Address: 172.16.230.80
    

    Take the steps necessary to ensure that the nameserver is reachable before continuing. If the node’s hostname is not resolvable, there are two options. The first option is to fix the A record for the host in the DNS server. The second option, which should only be used when no DNS servers are available, is to add all the nodes as entries in the /etc/hosts file.

  3. Reboot the node using the following command:

    $ sudo reboot
    

Repeat this procedure for each node.

At this point, the nodes are ready to join the cluster.

Creating the Cluster

The next step is to create a cluster of Diamanti nodes. After a cluster is formed, the Diamanti software pools resources across all nodes in the cluster, enabling Kubernetes to efficiently schedule containers within your environment.

Note

SSH to one of the nodes that will be part of the cluster to run the commands below.

  1. Format the drives on all nodes if the same nodes are being reused to create a cluster without reinstalling the OS.

    Note

    Skip this step when creating the cluster for the first time.

    Formatting the drives erases the data from the drives that will be used for storing application data (PVs).

    Example:

    $ sudo format-dss-node-drives.sh -n <nodename>
    $ sudo reboot
    
  2. Creating the Cluster

    The dctl cluster create command is used to create the cluster. Flags tagged Mandatory must be included in the create command; flags tagged Optional may be omitted.

    Note

    All commands, except the dctl cluster create command, require administrators to be logged into the cluster (using the dctl login command).

    dctl
      -s <node name(hostname)/node IP address> # (Mandatory) the IP or DNS short name of the node you are creating the cluster from.
      cluster create <cluster name> # Mandatory
      <node1,node2,node3…> # Mandatory
      --vip <virtual-IP> # Mandatory
      --poddns <cluster subdomain> # Mandatory
      --storage-vlan <vlan id> # Optional
      --admin-password # (Optional) you will be prompted for a password if not specified.
      --ca-cert <path to file> # Optional
      --tls-cert <path to file> # Optional
      --tls-key <path to file> # Optional
      --vip-mgmt <local|external> # (Optional) defaults to local if not specified.
    

    For Example:

    $ dctl -s dssserv10 cluster create dsstb4 dssserv10,dssserv11,dssserv12 --vip 172.16.230.122 --poddns cluster.local
    

    Note

    Do not add spaces or other whitespace characters when specifying the comma-separated list of nodes (using DNS short names).

    Note

    The dctl cluster create command automatically adds an administrative user named admin. Using the default quorum size of three nodes, the first three nodes specified in the dctl cluster create command become master nodes by default.

    Once the cluster is created, log in to the cluster.

    $ dctl -s <VIP> login -u admin -p <password>
    
    dctl -s  172.16.230.122 login -u admin -p <password>
      Name            : dsstb4
      Virtual IP      : 172.16.230.122
      Server          : dsstb4.bos.diamanti.com
      WARNING: Thumbprint : bd ec 3d 8e dd 17 d2 6f fc fa 87 7c af d9 2a 4e d7 3a 10 50 47 6d b9 40 1b f2 70 3b b8 55 81 be
      [CN:diamanti-signer@1690788887, OU:[], O=[] issued by CN:diamanti-signer@1690788887, OU:[], O=[]]
      Configuration written successfully
      Successfully logged in
    

    Wait for the status of the nodes to reach the Good state before moving to the next step.

    $ dctl cluster status
    

    Example:

    $ dctl cluster status
    Name            : dsstb4
    UUID            : b81489cf-2f74-11ee-b57a-3cecef1b4edc
    State           : Created
    Version         : 3.6.2 (62)
    Etcd State      : Healthy
    Virtual IP      : 172.16.230.122
    Pod DNS Domain  : cluster.local
    
    NAME        NODE-STATUS   K8S-STATUS   ROLE      MILLICORES   MEMORY            STORAGE         SCTRLS
                                                                                                  LOCAL, REMOTE
    dssserv10   Good          Good         master*   7100/48000   25.07GiB/192GiB   2.45GB/3.83TB   2/64, 4/64
    dssserv11   Good          Good         master    7100/48000   25.07GiB/192GiB   2.45GB/3.83TB   0/64, 2/64
    dssserv12   Good          Good         master    7200/48000   25.26GiB/192GiB   2.45GB/3.83TB   0/64, 2/64
    

    The cluster is up and can be accessed using the dctl CLI tool as above or through the browser at the URL: https://<Virtual IP>. The last two steps set up the storage and networking for the cluster.

  3. Setup Storage

    Diamanti Storage Stack (DSS) services are enabled on each node by adding the diamanti.com/dssnode label and setting its value to one of the supported DSS storage classes (medium in the example below).

    Example:

    $ for host in <hostname-1> <hostname-2> <hostname-3>
    do
       dctl node label ${host} diamanti.com/dssnode=medium
    done
    

    Once the diamanti.com/dssnode label is added, the diamanti-dssapp-<storage-class> DaemonSet starts a diamanti-dssapp-<storage-class> pod on each labeled node. Verify that each of these pods is in the Running state.

    $ kubectl -n diamanti-system get pods | grep dssapp
    diamanti-dssapp-medium-99s4j                                    1/1     Running   2             26h
    diamanti-dssapp-medium-fdj24                                    1/1     Running   0             26h
    diamanti-dssapp-medium-rkg2p                                    1/1     Running   2             26h
    

    Verify the storage status using the dctl drive list command.

    $ dctl drive list
    NODE        SLOT      S/N                  DRIVESET                               RAW CAPACITY   USABLE CAPACITY   ALLOCATED   FIRMWARE   STATE     SELF-ENCRYPTED
    dssserv10   0         PHLJ007102UZ1P0FGN   37f345f3-f6ef-4c94-8afb-8b66729e558f   1TB            957.78GB          613.46MB    VDV10170   Up        No
    dssserv10   1         PHLJ007102FZ1P0FGN   37f345f3-f6ef-4c94-8afb-8b66729e558f   1TB            957.78GB          613.47MB    VDV10170   Up        No
    dssserv10   2         PHLJ007101EM1P0FGN   37f345f3-f6ef-4c94-8afb-8b66729e558f   1TB            957.78GB          613.54MB    VDV10170   Up        No
    dssserv10   3         PHLJ007101DP1P0FGN   37f345f3-f6ef-4c94-8afb-8b66729e558f   1TB            957.78GB          613.63MB    VDV10170   Up        No
    dssserv11   0         PHLJ007102V71P0FGN   7c9fdc16-62d3-4f8d-85e8-eca43f6ec383   1TB            957.78GB          613.47MB    VDV10170   Up        No
    dssserv11   1         PHLJ007101D61P0FGN   7c9fdc16-62d3-4f8d-85e8-eca43f6ec383   1TB            957.78GB          613.63MB    VDV10170   Up        No
    dssserv11   2         PHLJ007102VM1P0FGN   7c9fdc16-62d3-4f8d-85e8-eca43f6ec383   1TB            957.78GB          613.46MB    VDV10170   Up        No
    dssserv11   3         PHLJ007101NT1P0FGN   7c9fdc16-62d3-4f8d-85e8-eca43f6ec383   1TB            957.78GB          613.54MB    VDV10170   Up        No
    dssserv12   0         PHLJ007101N71P0FGN   deb7ef26-7b43-4775-b479-0e574d5fab0a   1TB            957.78GB          613.46MB    VDV10170   Up        No
    dssserv12   1         PHLJ007102SL1P0FGN   deb7ef26-7b43-4775-b479-0e574d5fab0a   1TB            957.78GB          613.63MB    VDV10170   Up        No
    dssserv12   2         PHLJ007102T71P0FGN   deb7ef26-7b43-4775-b479-0e574d5fab0a   1TB            957.78GB          613.54MB    VDV10170   Up        No
    dssserv12   3         PHLJ007101EJ1P0FGN   deb7ef26-7b43-4775-b479-0e574d5fab0a   1TB            957.78GB          613.47MB    VDV10170   Up        No
    

    Also, verify the storage status using the dctl cluster status command.

    $ dctl cluster status
    Name            : dsstb4
    UUID            : b81489cf-2f74-11ee-b57a-3cecef1b4edc
    State           : Created
    Version         : 3.6.2 (62)
    Etcd State      : Healthy
    Virtual IP      : 172.16.230.122
    Pod DNS Domain  : cluster.local
    
    NAME        NODE-STATUS   K8S-STATUS   ROLE      MILLICORES   MEMORY            STORAGE         SCTRLS
                                                                                                  LOCAL, REMOTE
    dssserv10   Good          Good         master*   7100/48000   25.07GiB/192GiB   2.45GB/3.83TB   2/64, 4/64
    dssserv11   Good          Good         master    7100/48000   25.07GiB/192GiB   2.45GB/3.83TB   0/64, 2/64
    dssserv12   Good          Good         master    7200/48000   25.26GiB/192GiB   2.45GB/3.83TB   0/64, 2/64
    
  4. Setup Overlay Network

    Before applications can be deployed, a private (overlay) network needs to be configured. Diamanti CNI assigns IP addresses to the applications from this pool.

    $ dctl network overlay create default -s 172.30.0.0/16 --isolate-ns=false --set-default
    

    Note

    The subnet configured using the above command is a private subnet (VxLAN encapsulated) and does not need to be routable.
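
    As a quick sanity check on the overlay subnet, the Python sketch below (standard library only, not Diamanti tooling) shows the pod address pool size for the 172.30.0.0/16 example and confirms it avoids the reserved ranges listed in the provisioning section:

```python
import ipaddress

overlay = ipaddress.ip_network("172.30.0.0/16")     # the subnet from the example command
reserved = [ipaddress.ip_network("172.20.0.0/24"),  # Diamanti pod management network
            ipaddress.ip_network("172.17.0.0/16")]  # Docker default network

print(overlay.num_addresses)                        # 65536 addresses in the pod IP pool
print(any(overlay.overlaps(r) for r in reserved))   # False: no collision with reserved ranges
```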

Installing the Licenses

The nodes provisioned in the above steps come pre-configured with a 72-hour license. To request a trial/PoC license or a subscription license, send an email request to Diamanti Support. The email must include the output of /etc/machine-id from all the nodes:

$ cat /etc/machine-id
a5c52cfa5445404ab9e48ad823b57945

You will receive a license file for each node, named after its machine-id, for example PassiveCert - UE_a5c52cfa5445404ab9e48ad823b57945.txt.

Copy each node's license file to /etc/diamanti/license/ on that node.
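
Since each license filename embeds the node's machine-id, the expected filename can be constructed mechanically. A sketch using the example id from above (the cp line is illustrative):

```shell
# Build the expected license filename from a node's machine-id.
machine_id="a5c52cfa5445404ab9e48ad823b57945"   # from: cat /etc/machine-id
license_file="PassiveCert - UE_${machine_id}.txt"
echo "$license_file"   # prints: PassiveCert - UE_a5c52cfa5445404ab9e48ad823b57945.txt

# On the node itself you would then copy the file into place, e.g.:
# sudo cp "$license_file" /etc/diamanti/license/
```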

Login to the cluster:

$ dctl -s <VIP address> login -u admin -p <password>

Activate the license:

$ dctl node license activate
License activation process started for node dssserv10
License activation process started for node dssserv11
License activation process started for node dssserv12

Check the license list and status. The example below shows a trial license:

$ dctl node license list
dssserv10   a5c52cfa5445404ab9e48ad823b57945   Trial     Active    30 Oct 2023
dssserv11   5725841532fc42f6b3517258403013a8   Trial     Active    30 Oct 2023
dssserv12   e5913dc2b34941d886c8db4378a0abd1   Trial     Active    30 Oct 2023

$ dctl node license status
Licensing check every:    12h0m0s
Licensing delay:          72h0m0s
Licensing alerts start:   720h0m0s

NAME        HOST SERIAL NUMBER                 LICENSING STATUS   CHECK INTERVAL   DELAY     LAST CHECK
dssserv10   a5c52cfa5445404ab9e48ad823b57945   Active             12h0m0s          72h0m0s   20m
dssserv11   5725841532fc42f6b3517258403013a8   Active             12h0m0s          72h0m0s   20m
dssserv12   e5913dc2b34941d886c8db4378a0abd1   Active             12h0m0s          72h0m0s   20m

Advanced Configuration

This section describes how to perform advanced Diamanti node configurations.

Changing the Node Password

Diamanti strongly recommends that you change the password immediately after initially logging in to a Diamanti node.

If not already logged in to the node, using a console monitor, log in to the node using the following credentials:

User: diamanti
Password: diamanti

Change the password using the following command:

passwd

Example:

$ passwd
Changing password for user diamanti.
Current password:
New password:
Retype new password:

Configuring the Date, Time, or Timezone

You can configure the date, time, or time zone on a Diamanti node. To change the date and time, use the following command:

sudo date -s <new-date-and-time>

To change the time zone, use the following command:

sudo timedatectl set-timezone <new-time-zone>

Example:

$ sudo timedatectl set-timezone America/Los_Angeles

You can list all available time zones using the following command:

$ timedatectl list-timezones

Manually Configuring the Management IP Configuration

You can manually configure the management port with a static IP configuration; the IP should be configured before creating the cluster. Using a console monitor that is logged in to the node, edit the ifcfg-eno1 file using the following command:

$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eno1

In the file, change BOOTPROTO=dhcp to BOOTPROTO=none.

BOOTPROTO=none

Also, add the following lines to the bottom of the file:

IPADDR=<ip-address>
NETMASK=<netmask-address>
GATEWAY=<gateway-address>

Example:

IPADDR=172.16.230.80
NETMASK=255.255.255.0
GATEWAY=172.16.230.1

Enable the static IP configuration.

$ sudo ifdown eno1
$ sudo ifup eno1

Manually Setting up the DNS Configuration

You can manually configure one or more DNS servers in your environment, if needed.

  1. Check the nameserver entries that are currently configured using the following command:

    $ cat /etc/resolv.conf
    

    The resolv.conf file contains directives that specify the default search domains along with the list of IP addresses of name servers (DNS servers) available for resolution.

  2. Edit the resolv.conf file.

    sudo vi /etc/resolv.conf
    

    Add (or append) the search domains using the following format:

    search <dns-domain-1> ... <dns-domain-n>
    

    Add one or more name servers using the following format:

    nameserver <dns-server-1>
    .
    .
    .
    nameserver <dns-server-n>
    
  3. Verify that a DNS server is configured using the following command:

    $ cat /etc/resolv.conf
    # Generated by NetworkManager
    search bos.diamanti.com
    nameserver 172.16.230.200
    

Manually Configuring the DNS Mapping

If a DNS server is not available on the network, you need to manually update the /etc/hosts file on the Diamanti UE on BM node to provide name resolution information for all Diamanti nodes that will be in the cluster.

Do the following:

  • Using a console monitor that is logged in to the node, edit the /etc/hosts file using the following command:

    $ sudo vi /etc/hosts
    
  • In the file, manually add the IP address, node name, and fully qualified domain name (FQDN) of all Diamanti nodes that will be in the cluster. For example:

    172.16.230.80 dssserv10.example.com dssserv10
    172.16.230.82 dssserv11.example.com dssserv11
    172.16.230.84 dssserv12.example.com dssserv12
    
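When several nodes need the same treatment, the entries can be generated rather than typed. A small sketch (hosts_line is a hypothetical helper; the names and IPs are the example values, so adjust them for your environment):

```shell
domain="example.com"

# Print one /etc/hosts line: hosts_line <short-name> <ip>
hosts_line() {
  printf '%s %s.%s %s\n' "$2" "$1" "$domain" "$1"
}

hosts_line dssserv10 172.16.230.80   # prints: 172.16.230.80 dssserv10.example.com dssserv10
hosts_line dssserv11 172.16.230.82
hosts_line dssserv12 172.16.230.84
```

Append the generated lines to /etc/hosts on every node in the cluster.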

Manually Configuring the NTP Server

You can set the Network Time Protocol (NTP) servers used by the Diamanti UE on BM node.

  1. Using a console monitor that is logged in to the node, edit the chrony.conf file.

    $ sudo vi /etc/chrony.conf
    
  2. Delete unneeded servers from the chrony.conf file.

  3. Add the NTP servers to the file. Use the following format:

    server <ntp-server-FQDN-or-ip-address> iburst

  4. Save the chrony.conf file.

Feature Matrix

This section provides a feature matrix outlining supported and unsupported features available with Diamanti UE on BM.

Feature                          Diamanti UE on BM
VIP-based cluster management     Supported
User management (LDAP/RBAC)      Supported
Persistent storage               Supported
Mirroring/Snapshots              Supported
NFS volumes                      Supported
Async replication                Supported
Overlay CNI                      Supported
Helm package management          Supported
Air-gapped cluster               Supported