VM Install Guide
Welcome to the Install Guide for installing Ultima Enterprise in a VMware vSphere VM environment. The purpose of this guide is to help you install and configure the software in the VM environment. It begins by listing the recommended minimum VM requirements and then walks through the steps to bring up the Kubernetes cluster.
Note
This guide uses the word node to describe the VM on which the Diamanti distribution is installed.
Minimum VM Requirements
Diamanti UE on VM has the following minimum VM requirements:
Resource | Minimum
---|---
Memory | 32 GB
CPU | 16 vCPUs
Boot Drive | 128 GB
Application Drive | 4x200 GB (datastore)
Networking | 2 vNICs
VMware ESXi | 6.7
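As an optional sanity check after a VM is provisioned, you can verify the allocated resources from inside the guest. This is only a sketch using standard Linux tools; device names and exact reported sizes will vary with your hypervisor configuration:

$ nproc                       # expect 16 or more vCPUs
$ free -g                     # expect roughly 32 GB or more of memory
$ lsblk -d -o NAME,SIZE       # expect the boot drive plus the 4x200 GB application drives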
Installation Checklist
The Diamanti UE OVA file for the 3.6.2 release. Contact Diamanti Support at support@diamanti.com to copy or download the package.
The following configuration parameters are needed for the cluster:
1 IP address for the cluster VIP
A minimum of 3 VMs provisioned using the steps listed below. The first 3 nodes will act as control-plane nodes and also as worker nodes.
1 IP address per VM for SSH/Clustering
The DNS-resolvable short name of each cluster node (hostname)
The DNS sub-domain for the cluster (pod DNS, e.g. cluster.local)
Zone names (optional; needed only if the nodes are going to be in different zones)
The following parameters are optional and can be skipped for PoC environments:
Cluster certificates. If not provided, self-signed certificates will be generated internally (see the example sketch after this list).
ca-cert : Certificate Authority
server-cert : Cluster public key with the following SAN info:
  - IP addresses: cluster virtual IP address, 10.0.0.1 (Kubernetes clusterIP)
  - DNS: cluster short name, cluster name FQDN, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local
server-key : Cluster private key
External load balancer configuration, if the VIP is managed by an external load balancer:
  - IP address: virtual IP address
  - Ports:
443 : Load-balanced to Quorum nodes port 7443
6443 : Load-balanced to Quorum nodes port 6443
12346 : Load-balanced to Quorum nodes port 12346
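If you choose to supply your own certificates, the following is a minimal sketch of generating them with openssl. The file names, subject fields, and the example VIP, cluster name, and FQDN (172.16.19.142, dvxtb1, dvxtb1.eng.diamanti.com) are placeholders taken from the examples in this guide; the SAN list must match the entries described above for your own cluster:

# Create a CA key and certificate (ca-cert).
$ openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
    -keyout ca.key -out ca.crt -subj "/CN=diamanti-ca"

# SAN configuration matching the requirements above (placeholder values).
$ cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
req_extensions = v3_req
[dn]
[v3_req]
subjectAltName = @alt_names
[alt_names]
IP.1 = 172.16.19.142
IP.2 = 10.0.0.1
DNS.1 = dvxtb1
DNS.2 = dvxtb1.eng.diamanti.com
DNS.3 = kubernetes
DNS.4 = kubernetes.default
DNS.5 = kubernetes.default.svc
DNS.6 = kubernetes.default.svc.cluster.local
EOF

# Create the cluster private key (server-key) and CSR, then sign it with the CA (server-cert).
$ openssl req -newkey rsa:4096 -nodes -keyout server.key -out server.csr \
    -subj "/CN=dvxtb1" -config san.cnf
$ openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 825 -out server.crt -extensions v3_req -extfile san.cnf

The resulting ca.crt, server.crt, and server.key would then be passed to dctl cluster create via the --ca-cert, --tls-cert, and --tls-key flags described later in this guide.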
Installation Steps
There are three steps involved in getting the cluster up and running.
- Provisioning the Nodes (VMs)
This includes provisioning the VMs, setting the hostname and the management/data interfaces, and setting up the software.
- Creating the Cluster
This includes creating the Kubernetes cluster using the VMs provisioned in the previous step, and setting up storage and networking (Diamanti CSI and CNI).
- Installing the Licenses
The cluster is fully functional at this stage, and the license expiry is set to 72 hours by default. The final step is to request the licenses and install them to extend the subscription or trial beyond 72 hours.
Provisioning the Nodes (VMs)
This section provides detailed information about configuring a Diamanti node. These steps need to be repeated for every node that needs to be provisioned for the cluster. For example, you need to repeat the below steps three times to provision the three nodes.
Note
If you are using a DHCP-based network configuration, Diamanti expects the DHCP reservation to use the MAC address associated with the management interface.
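If you need the MAC address for the DHCP reservation, one way to read it from inside the guest is shown below. This is only a sketch; ens192 is the management interface name used in the sample output later in this guide and may differ in your VM:

$ ip link show ens192 | awk '/link\/ether/ {print $2}'
00:0c:29:eb:63:f6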
Note
By default, the management interface is used for Kubernetes management traffic and pod data traffic. You can change this if you have an additional adapter/port available on the node and want to keep Kubernetes control traffic separate from pod data traffic.
Note
When configuring the Diamanti node, the following IP addresses are used internally and are not available for network configuration:
To set up a node:
Import the UE OVA image into the ESXi host.
Click Create/Register VM. Under Select creation type, select Deploy a virtual machine from an OVF or OVA file and click Next.
Enter the name of the virtual machine. Drag and drop the OVA image file (or click to select the OVA file) and click Next.
Select the storage (datastore) and click Next.
Select the deployment options (the default disk provisioning is Thin). Click Next and then Finish to finish the VM provisioning.
You may see an "A required disk image was missing" warning message. It can be ignored.
The node will boot up at this stage.
Using a console terminal, log in to the node using the following credentials:
User: diamanti
Password: diamanti
Verify the node name using the following command:
$ hostname
If the node name is diamanti, you need to manually configure the node name. You can change the hostname using the following command:
$ sudo hostnamectl set-hostname <hostname>
Note
The hostname should not be a fully qualified domain name (FQDN)
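A quick way to confirm the short name versus the FQDN is shown below. This is only a sketch; the example values come from the nslookup output shown later in this guide:

$ hostname       # short name, e.g. dsv1-vm-2
$ hostname -f    # FQDN, e.g. dsv1-vm-2.eng.diamanti.com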
Run the following command to set up the software release:
$ sudo install_dvx -y
$ sudo reboot
Note
Diamanti strongly recommends that you change the password immediately after logging in. For details about completing this process, see Changing the Node Password in Advanced Configuration section.
Verify the date, time, and timezone on the node using the following command:
$ date
To change the date, time, or timezone, see Configuring the Date, Time, or Timezone in Advanced Configuration section.
Verify that the management IP address is configured and reachable.
$ ifconfig
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.7.203  netmask 255.255.224.0  broadcast 172.16.31.255
        inet6 fe80::20c:29ff:feeb:63f6  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:eb:63:f6  txqueuelen 1000  (Ethernet)
        RX packets 96  bytes 7871 (7.6 KiB)
        RX errors 0  dropped 3  overruns 0  frame 0
        TX packets 64  bytes 6389 (6.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.7.205  netmask 255.255.224.0  broadcast 172.16.31.255
        inet6 fe80::20c:29ff:feeb:6300  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:eb:63:00  txqueuelen 1000  (Ethernet)
        RX packets 27  bytes 2195 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1391 (1.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
The output above is an example. Confirm that the link is up, and the IP configuration is present.
Perform the following steps to create bonding:
Run the following script to enable bonding.
sudo diamanti-bonding.sh enable dhcp link
Run the following command to reboot the node.
sudo reboot
OR
To disable the second interface on the machine:
Run the following command to make changes in the interface configuration file.
sudo vi /etc/sysconfig/network-scripts/ifcfg-eno2
Change the variable BOOTPROTO to none and ONBOOT to no.
Run the following command to reboot the node.
sudo reboot
Static IP Configuration: If the IP configuration is not present, you need to manually complete this configuration. For more information, see Manually Configuring the Management IP Configuration in Advanced Configuration section. After completing this procedure, attempt to verify the management IP address configuration again.
Verify that the management IP address is reachable using the following command:
ping -c 4 <management-ip-address>

For example:

$ ping -c 4 172.16.19.142
PING 172.16.19.142 (172.16.19.142) 56(84) bytes of data.
64 bytes from 172.16.19.142: icmp_seq=1 ttl=64 time=0.313 ms
64 bytes from 172.16.19.142: icmp_seq=2 ttl=64 time=0.291 ms
64 bytes from 172.16.19.142: icmp_seq=3 ttl=64 time=0.220 ms
64 bytes from 172.16.19.142: icmp_seq=4 ttl=64 time=0.251 ms

--- 172.16.19.142 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.220/0.268/0.313/0.041 ms
Take the steps necessary to ensure that the management IP address is configured and reachable before continuing.
Determine the gateway IP address using the following command:
ip route list match 255.255.255.255
default via 172.16.0.1 dev ens255f1
Ping the default gateway
ping -c 4 <gateway-ip-address>
For example:
$ ping -c 4 172.16.0.1
PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
64 bytes from 172.16.0.1: icmp_seq=1 ttl=255 time=4.79 ms
64 bytes from 172.16.0.1: icmp_seq=2 ttl=255 time=0.602 ms
64 bytes from 172.16.0.1: icmp_seq=3 ttl=255 time=0.618 ms
64 bytes from 172.16.0.1: icmp_seq=4 ttl=255 time=6.73 ms

--- 172.16.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.602/3.186/6.737/2.668 ms
Add Domain Name System (DNS) server details to the resolv.conf file and verify that a DNS server is reachable.
The output shown below is an example.
cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search eng.diamanti.com.
nameserver 172.16.1.2
Note
The resolv.conf file should contain directives that specify the default search domains along with the list of IP addresses of nameservers (DNS servers) available for resolution.
If DNS servers are not configured, you can choose to manually configure one or more DNS servers. For details about completing this process, see Manually Setting up the DNS Configuration in Advanced Configuration section.
Verify that the nameserver is resolvable using the following command:
nslookup $HOSTNAME

$ nslookup dsv1-vm-2
Server:    172.16.1.2
Address:   172.16.1.2#53

Name:      dsv1-vm-2.eng.diamanti.com
Address:   172.16.5.30
Take the steps necessary to ensure that the nameserver is reachable before continuing. If the node's hostname is not resolvable, there are two options. The first option is to fix the A record for the host in the DNS server. The second option, which should only be used when no DNS servers are available, is to add all the nodes as entries in the /etc/hosts file.
Reboot the node using the following command
$ sudo reboot
Repeat this procedure for each node.
At this point, the nodes are ready to join the cluster.
Creating the Cluster
Next, create a cluster of Diamanti nodes. After a cluster is formed, the Diamanti software pools resources across all nodes in the cluster, enabling Kubernetes to efficiently schedule containers within your environment.
To create a Cluster:
Run the following commands on one of the nodes in the cluster.
Format the drives on all nodes if the same nodes are being reused to create the cluster without a fresh OVA deployment.
Note
Skip this step when creating the cluster for the first time.
Formatting the drives will erase the data from the drives that will be used for storing the application data (PVs).
Example:
$ sudo format-dss-node-drives.sh -n <nodename>
$ sudo reboot
Create the Cluster
The dctl cluster create command is used to create the cluster.

Mandatory flags

Note
All commands, except the dctl cluster create command, require administrators to be logged in to the cluster (using the dctl login command).

dctl -s <node name(hostname)/node IP address>  # (Mandatory) the IP or DNS short name of the node you are creating the cluster from.
cluster create <cluster name>                  # Mandatory
<node1,node2,node3…>                           # Mandatory
--vip <virtual-IP>                             # Mandatory
--poddns <cluster subdomain>                   # Mandatory
--storage-vlan <vlan id>                       # Optional
--admin-password                               # (Optional) you will be asked to provide one if not specified in the command.
--ca-cert <path to file>                       # Optional
--tls-cert <path to file>                      # Optional
--tls-key <path to file>                       # Optional
--vip-mgmt <local|external>                    # (Optional) defaults to local if not given
For Example:
$ dctl -s dsv1-vm-1 cluster create dvxtb1 dsv1-vm-1,dsv1-vm-2,dsv1-vm-3 --vip 172.16.19.142 --poddns cluster.local
Note
Do not add spaces or other whitespace characters when specifying the comma-separated list of nodes (using DNS short names).
Note
The dctl cluster create command automatically adds an administrative user named admin. Using the default quorum size of three nodes, the first three nodes specified in the dctl cluster create command become master nodes by default.

Once the cluster is created, log in to the cluster.
$ dctl -s <VIP> login -u admin -p <password>
Example:
dctl -s 172.16.19.142 login
Name        : dvxtb1
Virtual IP  : 172.16.19.142
Server      : dvxtb1.eng.diamanti.com
WARNING: Thumbprint : fb 65 04 29 86 55 81 7b e9 a8 ab 19 c3 79 8b 5d fd 05 70 f0 c2 64 be d3 83 6c 42 3c 2f 7c 54 8c
[CN:diamanti-signer@1692017827, OU:[], O=[] issued by CN:diamanti-signer@1692017827, OU:[], O=[]]
Configuration written successfully
Username: admin
Password:
Successfully logged in
Wait for the status of the nodes to be in Good state before moving to the next step.

$ dctl cluster status
Example:
dctl cluster status
Name            : dvxtb1
UUID            : 112ec41d-3aa2-11ee-94b7-000c29f70c62
State           : Created
Version         : 3.6.2 (62)
Etcd State      : Healthy
Virtual IP      : 172.16.19.142
Pod DNS Domain  : cluster.local

NAME        NODE-STATUS  K8S-STATUS  ROLE     MILLICORES   MEMORY           STORAGE  SCTRLS LOCAL, REMOTE
dsv1-vm-1   Good         Good        master*  7100/16000   25.07GiB/64GiB   0/0      0/64, 0/64
dsv1-vm-2   Good         Good        master   7200/16000   25.26GiB/64GiB   0/0      0/64, 0/64
dsv1-vm-3   Good         Good        master   7100/16000   25.07GiB/64GiB   0/0      0/64, 0/64
The cluster is up and can be accessed using the dctl CLI tool as above or through a browser at the URL https://<Virtual IP>. The remaining two steps set up the storage and networking for the cluster.
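As an optional reachability check from a workstation, you can probe the VIP over HTTPS. This is only a sketch; the -k flag is needed when the cluster is using internally generated self-signed certificates, and any non-error HTTP response simply indicates the VIP and web endpoint are reachable:

$ curl -k -I https://172.16.19.142/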
Setup Storage
Diamanti Storage Stack (DSS) services are enabled on each node by adding the diamanti.com/dssnode label and setting its value to one of the supported DSS storage classes (for example, medium).
Example:
$ for host in <hostname-1> <hostname-2> <hostname-3>
  do
    dctl node label ${host} diamanti.com/dssnode=medium
  done
Once the diamanti.com/dssnode label is added, the diamanti-dssapp-<storage-class> DaemonSet starts a diamanti-dssapp-<storage-class> pod on each node. Verify that the diamanti-dssapp-<storage-class> pods are in the Running state.
$ kubectl -n diamanti-system get pods | grep dssapp
diamanti-dssapp-medium-g67w9   1/1   Running   2   45h
diamanti-dssapp-medium-h25zj   1/1   Running   2   44h
diamanti-dssapp-medium-pwfkj   1/1   Running   3   45h
Verify the storage status by using dctl drive list.

$ dctl drive list
NODE        SLOT  S/N              DRIVESET                              RAW CAPACITY  USABLE CAPACITY  ALLOCATED  FIRMWARE  STATE  SELF-ENCRYPTED
dsv1-vm-1   0     VMWareNVME-0001  cd4d279a-aa89-44cb-9707-1b33de5bcadb  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-1   1     VMWareNVME-0002  cd4d279a-aa89-44cb-9707-1b33de5bcadb  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-1   2     VMWareNVME-0003  cd4d279a-aa89-44cb-9707-1b33de5bcadb  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-1   3     VMWareNVME-0000  cd4d279a-aa89-44cb-9707-1b33de5bcadb  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-2   0     VMWareNVME-0001  644439c9-6275-4082-b584-ffce43e81079  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-2   1     VMWareNVME-0002  644439c9-6275-4082-b584-ffce43e81079  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-2   2     VMWareNVME-0003  644439c9-6275-4082-b584-ffce43e81079  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-2   3     VMWareNVME-0000  644439c9-6275-4082-b584-ffce43e81079  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-3   0     VMWareNVME-0001  e1d2f583-9d75-432d-8d1f-c9de9ed5f566  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-3   1     VMWareNVME-0002  e1d2f583-9d75-432d-8d1f-c9de9ed5f566  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-3   2     VMWareNVME-0003  e1d2f583-9d75-432d-8d1f-c9de9ed5f566  214.75GB      186.83GB         184.16GB   1.0       Up     No
dsv1-vm-3   3     VMWareNVME-0000  e1d2f583-9d75-432d-8d1f-c9de9ed5f566  214.75GB      186.83GB         184.16GB   1.0       Up     No
Also, verify the storage status by using the dctl cluster status command.

$ dctl cluster status
Name            : dvxtb1
UUID            : 112ec41d-3aa2-11ee-94b7-000c29f70c62
State           : Created
Version         : 3.6.2 (62)
Etcd State      : Healthy
Virtual IP      : 172.16.19.142
Pod DNS Domain  : cluster.local

NAME        NODE-STATUS  K8S-STATUS  ROLE     MILLICORES   MEMORY           STORAGE      SCTRLS LOCAL, REMOTE
dsv1-vm-1   Good         Good        master*  7100/16000   25.07GiB/64GiB   0/747.32GB   0/64, 0/64
dsv1-vm-2   Good         Good        master   7200/16000   25.26GiB/64GiB   0/747.32GB   0/64, 0/64
dsv1-vm-3   Good         Good        master   7100/16000   25.07GiB/64GiB   0/747.32GB   0/64, 0/64
Setup Overlay Network
It is necessary to configure a private (overlay) network before deploying applications. Diamanti CNI assigns IP addresses to the applications from this pool.
$ dctl network overlay create default -s 172.30.0.0/16 --isolate-ns=false --set-default
Note
The subnet configured using the above command is a private subnet (VxLAN encapsulated) and does not need to be routable on the underlying network.
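To sanity-check the overlay network, you can start a short-lived test pod and confirm that its IP address falls within the subnet configured above. This is an optional sketch; the busybox image and the pod name are arbitrary choices, not part of the Diamanti tooling:

$ kubectl run overlay-test --image=busybox --restart=Never -- sleep 300
$ kubectl get pod overlay-test -o wide     # the pod IP should be within 172.30.0.0/16
$ kubectl delete pod overlay-test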
Installing the Licenses
The nodes provisioned in the above steps come pre-configured with a 72-hour license. To request a Trial/PoC license or a subscription license, send an email request to Diamanti Support. The email must include the contents of /etc/machine-id from all the nodes:
$ cat /etc/machine-id
1f3b85dd789d4f4a88c9972e8c5586a2
You will receive the license files for all the nodes, named in the pattern PassiveCert - UE_<machine-id>.txt (for example, PassiveCert - UE_1f3b85dd789d4f4a88c9972e8c5586a2.txt).
Copy each node's license file to the path /etc/diamanti/license/ on that node.
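One way to copy a license file to its node is with scp and ssh. This is only a sketch using the example node and file names from above; quote the file name because it contains spaces:

$ scp "PassiveCert - UE_1f3b85dd789d4f4a88c9972e8c5586a2.txt" diamanti@dsv1-vm-3:/tmp/
$ ssh diamanti@dsv1-vm-3 'sudo mkdir -p /etc/diamanti/license && sudo mv /tmp/"PassiveCert - UE_"*.txt /etc/diamanti/license/'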
Login to the cluster:
$ dctl -s <VIP address> login -u admin -p <password>
Activate the license:
$ dctl node license activate
License activation process started for node dsv1-vm-1
License activation process started for node dsv1-vm-2
License activation process started for node dsv1-vm-3
Check the license list and status. The example below shows the trial license type:
$ dctl node license list
NODE NAME LICENSE ID TYPE STATUS EXPIRATION DATE
dsv1-vm-1 719feb4c8fe640c6a15b7f739aa60b10 Trial Active 14 Nov 2023
dsv1-vm-2 65b43c560aa549a2a7a56f6d3a57742d Trial Active 14 Nov 2023
dsv1-vm-3 1f3b85dd789d4f4a88c9972e8c5586a2 Trial Active 14 Nov 2023
$ dctl node license status
Licensing check every: 12h0m0s
Licensing delay: 72h0m0s
Licensing alerts start: 720h0m0s
NAME HOST SERIAL NUMBER LICENSING STATUS CHECK INTERVAL DELAY LAST CHECK
dsv1-vm-1 719feb4c8fe640c6a15b7f739aa60b10 Active 12h0m0s 72h0m0s 1m
dsv1-vm-2 65b43c560aa549a2a7a56f6d3a57742d Active 12h0m0s 72h0m0s 1m
dsv1-vm-3 1f3b85dd789d4f4a88c9972e8c5586a2 Active 12h0m0s 72h0m0s 1m
Advanced Configuration
This section describes how to perform advanced Diamanti node configurations.
Changing the Node Password
Diamanti strongly recommends that you change the password immediately after initially logging in to a Diamanti node.
If not already logged in to the node, using a console monitor, log in to the node using the following credentials:
User: diamanti
Password: diamanti
Change the password using the following command:
passwd
For example:
$ passwd
Changing password for user diamanti.
Changing password for diamanti.
New password:
Retype new password:
Configuring the Date, Time, or Timezone
You can configure the date, time, or time zone on a Diamanti node. To change the date and time, use the following command:
sudo date -s <new-date-and-time>
To change the time zone, use the following command:
sudo timedatectl set-timezone <new-time-zone>
For example:
$ sudo timedatectl set-timezone America/Los_Angeles
You can list all available time zones using the following command:
$ timedatectl list-timezones
Manually Configuring the Management IP Configuration
You can manually configure the management port using a static IP configuration. Configure the IP before creating the cluster. Set the management port to use a static IP configuration using a console monitor connected to the node. Edit the ifcfg-ens255f1
file using the following command:
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-ens255f1
In the file, change BOOTPROTO=dhcp to BOOTPROTO=none.
Example:
BOOTPROTO=none
Also, add the following lines to the bottom of the file:
IPADDR=<ip-address>
NETMASK=<netmask-address>
GATEWAY=<gateway-address>
Example:
IPADDR=172.16.1.71
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
Enable the static IP configuration.
$ sudo ifdown ens255f1
$ sudo ifup ens255f1
Manually Setting up the DNS Configuration
You can manually configure one or more DNS servers in your environment, if needed.
Check the nameserver entries that are currently configured using the following command:
$ cat /etc/resolv.conf
The resolv.conf file contains directives that specify the default search domains along with the list of IP addresses of name servers (DNS servers) available for resolution.
Edit the resolv.conf file.
sudo vi /etc/resolv.conf
Add (or append) the search domains using the following format:
search <dns-domain-1> ... <dns-domain-n>
Add one or more name servers using the following format:
nameserver <dns-server-1>
...
nameserver <dns-server-n>
Verify that a DNS server is configured using the following command:
$ cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search eng.diamanti.com.
nameserver 172.16.1.2
Manually Configuring the DNS Mapping
If a DNS server is not available on the network, you need to manually update the /etc/hosts file on the Diamanti UE on VM node to provide name resolution information for all Diamanti nodes that will be in the cluster.
Do the following:
Using a console monitor that is logged in to the node, edit the /etc/hosts file using the following command:
$ sudo vi /etc/hosts
In the file, manually add the IP address, node name, and fully qualified domain name (FQDN) of all Diamanti nodes that will be in the cluster. For example:
131.10.10.1   node1.example.com   node1
131.10.10.3   node2.example.com   node2
131.10.10.5   node3.example.com   node3
Manually Configuring the NTP Server
You can set the Network Time Protocol (NTP) servers used by the Diamanti UE on VM node.
Using a console monitor that is logged in to the node, edit the chrony.conf file.
$ sudo vi /etc/chrony.conf
Delete unneeded servers from the chrony.conf file.
Add the NTP servers to the file. Use the following format:
server <ntp-server-FQDN-or-ip-address> iburst
Save the chrony.conf file.
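Chrony reads its configuration at startup, so after saving the file you would typically restart the service and confirm the new sources are in use. This is a sketch using standard chrony tooling:

$ sudo systemctl restart chronyd
$ chronyc sources -v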
Feature Matrix
This section provides a feature matrix outlining supported and unsupported features available with Diamanti UE on VM.
Feature | Diamanti UE on VM
---|---
VIP-based cluster management | Supported
User management (LDAP/RBAC) | Supported
Persistent storage | Supported
Mirroring/Snapshots | Supported
NFS volumes | Supported
Async replication | Supported
Overlay CNI | Supported
Helm package management | Supported
Air-gapped cluster | Supported