Dell Edge Server Install Guide
The purpose of this guide is to help you install and configure the software in the Dell Edge Server Bare Metal (also referred to as DESBM) environment. It begins by listing the recommended minimum Dell Edge Server Bare Metal requirements and then walks through the steps to bring up the Kubernetes cluster.
This document describes the installation steps for the PowerEdge XR4000z (Dell Edge small box) and the PowerEdge XR4000r (Dell Edge big box) systems. The installation steps are the same as in the Bare Metal Install Guide.
Overview
The PowerEdge XR4000r system is a 2U rackmount chassis that supports:
Up to four XR4510c 1U single-width compute sleds, or up to two XR4520c 2U single-width compute sleds, or two XR4510c 1U single-width sleds plus one XR4520c 2U single-width compute sled, and an optional nano-server (witness sled XR4000w), for example for vSAN.
Up to two redundant AC or DC power supply units
For more information, see Installation and Service Manual for PowerEdge XR4000r.
The PowerEdge XR4000z system is a 2U stackable chassis that supports:
Up to two XR4510c 1U single-socket server sleds, or one XR4520c 2U single-socket server sled, and one optional nano-server (witness sled XR4000w), for example for vSAN.
Two redundant AC or DC power supply units.
For more information, see Installation and Service Manual for PowerEdge XR4000z.
Minimum BM Requirements
Diamanti UE on DESBM has the following minimum requirements:
| Resource | Minimum |
|---|---|
| Memory | 64 GB |
| CPU | 32 cores |
| Boot Drive | 480 GB |
| Application Drive | 4 x NVMe SSDs (minimum 200 GB per SSD) |
| Networking | 1 NIC |
Installation Checklist
Download the following RPMs:
diamanti-cx rpm
diamanti-cx-docker-core
Please contact Diamanti Support at support@diamanti.com to obtain the packages.
The following configuration parameters are needed for the cluster:
1 IP address for the cluster VIP
The steps listed below provision a cluster of two Dell Edge Bare Metal nodes and one Dell Edge software control plane node. Both Dell Edge Bare Metal nodes will serve as control-plane and worker nodes.
One IP address per Dell Edge Bare Metal for SSH/Clustering.
The DNS-resolvable short name of the cluster nodes (hostname)
The DNS sub-domain for the cluster (pod DNS, e.g. cluster.local)
Zone names (optional; needed only if the nodes will be in different zones)
The following parameters are optional and can be skipped for PoC environments:
Cluster certificates. If you do not provide your own certificates, internal self-signed certificates are generated. (A sketch of one way to generate these certificates appears after this list.)
ca-cert : Certificate Authority
server-cert : Cluster public key with the following SAN information:
- IP addresses: cluster virtual IP address, 10.0.0.1 (Kubernetes clusterIP)
- DNS: cluster short name, cluster FQDN, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local
server-key : Cluster private key
External load balancer configuration, if the VIP is managed by an external load balancer:
- IP address: virtual IP address
- Ports:
443 : Load-balanced to Quorum nodes port 7443
6443 : Load-balanced to Quorum nodes port 6443
12346 : Load-balanced to Quorum nodes port 12346
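As a reference for the certificate parameters above, the following is a minimal sketch of how such certificates could be generated with openssl (the tool choice, file names, and validity periods are illustrative and not a Diamanti requirement; the SAN entries mirror the list above):

$ openssl genrsa -out ca-key.pem 4096
$ openssl req -x509 -new -key ca-key.pem -days 3650 -subj "/CN=<cluster-ca>" -out ca-cert.pem
$ openssl genrsa -out server-key.pem 4096
$ openssl req -new -key server-key.pem -subj "/CN=<cluster-short-name>" -out server.csr
$ openssl x509 -req -in server.csr -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -days 825 \
    -extfile <(printf "subjectAltName=IP:<cluster-VIP>,IP:10.0.0.1,DNS:<cluster-short-name>,DNS:<cluster-FQDN>,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local") \
    -out server-cert.pem

The resulting ca-cert.pem, server-cert.pem, and server-key.pem files map to the ca-cert, server-cert, and server-key parameters, respectively.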
Installation Steps
There are three steps involved in getting the cluster up and running.
- Provisioning the nodes: this includes provisioning the Dell Edge Bare Metal nodes, the hostname, and the management/data interfaces, and setting up the software.
- Creating the cluster: this includes creating the Kubernetes cluster using the Dell Edge Bare Metal nodes provisioned in the previous step, and setting up storage and networking (Diamanti CSI and CNI).
- Installing the licenses: by default, a fully functional cluster comes with a license that expires in 72 hours. To extend the subscription or trial beyond 72 hours, you must request licenses and install them.
Provisioning the Nodes
The following section provides detailed instructions for configuring a Diamanti node. You will need to repeat these steps for every node you wish to provision for the cluster. For example, if you plan on provisioning three nodes, you need to repeat these steps three times.
Note
Diamanti expects DHCP reservations to use the MAC address associated with the management interface when you use DHCP-based network configurations.
Note
Kubernetes management traffic and pod data traffic are handled by the management interface by default. Alternatively, you can keep Kubernetes control traffic separate from pod data traffic on a node with an additional adapter/port.
Note
When configuring the Diamanti node, the following IP addresses are used internally and are not available for network configuration:
Set up the node by following these steps:
The following credentials will be necessary to login to the node via a console terminal:
User: diamanti
Password: diamanti
Note
We strongly recommend that you change your password right after logging in. For details about completing this process, see Changing the Node Password in the Advanced Configuration section.
Verify the node name using the following command:
$ hostname
If the node name is diamanti, you need to manually configure the node name. You can change the hostname using the following command:
$ sudo hostnamectl set-hostname <hostname>
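For example (edgeserv1 is a placeholder node name, matching the examples used later in this guide):

$ sudo hostnamectl set-hostname edgeserv1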
Note
The hostname should not be a fully qualified domain name (FQDN)
Run the following commands to set up the software release:
$ sudo rpm -ivh <diamanti-cx*-rpm-name>
$ sudo rpm -ivh <docker-core-rpm-name>
$ sudo reboot
Verify the date, time, and timezone on the node using the following command:
$ date
To change the date, time, or timezone, see Configuring the Date, Time, or Timezone in Advanced Configuration section.
Verify that the management IP address is configured on the interface and is reachable. If there is more than one interface, either bond the interfaces or disable the remaining interfaces.
The output below shows an example. The link must be up, and the IP configuration must be present.
$ ifconfig
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.6.226  netmask 255.255.224.0  broadcast 172.16.31.255
        ether c8:4b:d6:91:e0:71  txqueuelen 1000  (Ethernet)
        RX packets 509448223  bytes 604400519898 (562.8 GiB)
        RX errors 0  dropped 5019  overruns 0  frame 0
        TX packets 509907859  bytes 616205690206 (573.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eno2: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether c8:4b:d6:91:e0:72  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
- Perform the following steps to create bonding:
Run the following script to create the bond:
sudo diamanti-bonding.sh enable dhcp link
Run the following command to reboot the node.
sudo reboot
Or
- To disable the second interface on the machine:
Run the following command to edit the interface configuration file:
sudo vi /etc/sysconfig/network-scripts/ifcfg-eno2
Change the BOOTPROTO variable to none and ONBOOT to no (see the sketch after these steps for one way to apply this change).
Run the following command to reboot the node.
sudo reboot
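For reference, after this edit the relevant lines of ifcfg-eno2 should read as shown below. The same change can also be scripted with sed (assuming the conventional uppercase key names used in ifcfg files):

BOOTPROTO=none
ONBOOT=no

$ sudo sed -i -e 's/^BOOTPROTO=.*/BOOTPROTO=none/' -e 's/^ONBOOT=.*/ONBOOT=no/' /etc/sysconfig/network-scripts/ifcfg-eno2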
If the IP configuration is not present, you need to manually complete this configuration. For more information, see Manually Configuring the Management IP Configuration in Advanced Configuration section. After completing this procedure, attempt to verify the management IP address configuration again.
Verify that the management IP address is reachable using the following command:
ping -c 4 <management-ip-address>

For example:

$ ping -c 2 172.16.6.226
PING 172.16.6.226 (172.16.6.226) 56(84) bytes of data.
64 bytes from 172.16.6.226: icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from 172.16.6.226: icmp_seq=2 ttl=64 time=0.017 ms

--- 172.16.6.226 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.017/0.022/0.028/0.007 ms
Before continuing, make sure that the management IP address is configured and accessible.
Determine the gateway IP address using the following command:
$ ip route list match 255.255.255.255
default via 172.16.0.1 dev eno1 proto dhcp metric 100
Ping the default gateway
ping -c 4 <gateway-ip-address>
For example:
$ ping -c 4 131.10.10.1
PING 131.10.10.1 (131.10.10.1) 56(84) bytes of data.
64 bytes from 131.10.10.1: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 131.10.10.1: icmp_seq=2 ttl=64 time=0.046 ms
64 bytes from 131.10.10.1: icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from 131.10.10.1: icmp_seq=4 ttl=64 time=0.059 ms

--- 131.10.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.046/0.053/0.059/0.007 ms
Verify that a Domain Name System (DNS) server is configured and reachable.
The output shown below is an example.
$ cat /etc/resolv.conf
# Generated by NetworkManager
search eng.diamanti.com
nameserver 172.16.1.2
Note
The resolv.conf file should contain directives that specify the default search domains along with the list of IP addresses of nameservers (DNS servers) available for resolution.
If DNS servers are not configured, you can choose to manually configure one or more DNS servers. For details about completing this process, see Manually Setting up the DNS Configuration in Advanced Configuration section.
Verify that the nameserver is reachable using the following command:
nslookup $HOSTNAME
Example
$ nslookup appserv87
Server:    172.16.1.2
Address:   172.16.1.2#53

Name:      appserv87.eng.diamanti.com
Address:   172.16.6.187
Take the necessary steps to ensure that the nameserver is reachable before you continue. If the node's hostname is not resolvable, there are two options. The first option is to fix the A record for the host in the DNS server. The second option, which should only be used when no DNS servers are available, is to add all the nodes as entries in the /etc/hosts file.
Reboot the node using the following command:
$ sudo reboot
Repeat this procedure for each node. The nodes are now ready to join the cluster.
Creating the Cluster
Next, we will create a cluster of Diamanti nodes. When a cluster is formed, the Diamanti software pools resources across all cluster nodes, enabling Kubernetes to schedule containers efficiently.
To create a cluster with a software node, we need to change its node type to SoftwareControlPlane in the config file before proceeding:
sudo vi /etc/diamanti/node_properties.conf
For example:
$ cat /etc/diamanti/node_properties.conf
{
    "management-interface-driver-name": "",
    "management-network-interface": "",
    "data-network-interface": "",
    "node-type": "SoftwareControlPlane",
    "virtual-node": {
        "block-device": [ ]
    }
}

Note
Run the following commands over SSH on a cluster node. Formatting drives applies only to Dell Edge servers, not to software control plane nodes.
Note
SSH to one of the nodes that will be part of the cluster to run the commands below.
Format the drives on all nodes if the same nodes are being reused to create the cluster without reinstalling the OS.
Note
Skip this step when creating the cluster for the first time.
Formatting the drives erases the data on which PVs will be stored.
Example:
$ sudo format-dss-node-drives.sh -n <nodename>
$ sudo reboot
Creating the Cluster
The dctl cluster create command is used to create the cluster. Flags tagged Mandatory must be supplied; flags tagged Optional may be omitted.
Note
All commands, except the dctl cluster create command, require administrators to be logged in to the cluster (using the dctl login command).

$ dctl -s <node name (hostname)/node IP address>   # (Mandatory) the IP or DNS short name of the node you are creating the cluster from
    cluster create <cluster name>                  # Mandatory
    <node1,node2,node3,...>                        # Mandatory
    --vip <virtual-IP>                             # Mandatory
    --poddns <cluster subdomain>                   # Mandatory
    --storage-vlan <vlan id>                       # Optional
    --admin-password                               # (Optional) you will be asked to provide one if not specified in the command
    --ca-cert <path to file>                       # Optional
    --tls-cert <path to file>                      # Optional
    --tls-key <path to file>                       # Optional
    --multizone value                              # (Optional) enable/disable multizone support
    --vip-mgmt <local|external>                    # (Optional) defaults to local if not specified
    --masters value, -m value                      # Comma-separated list of master/etcd nodes in the cluster

Note
The dctl cluster create command automatically adds a user named admin when you create a cluster. Software control plane nodes must be included in the master list provided by the --masters option. You must specify three nodes in the --masters option since the quorum size is set to three.
For example:
$ dctl -s edgeserv1 cluster create desbm-edge edge-etcd-1,edgeserv1,edgeserv2 --vip 172.16.19.77 --poddns cluster.local --masters edge-etcd-1,edgeserv1,edgeserv2
Note
Do not add spaces or other whitespace characters when specifying the comma-separated list of nodes (using DNS short names).
Note
The dctl cluster create command automatically adds an administrative user named admin. Using the default quorum size of three nodes, the first three nodes specified in the dctl cluster create command become master nodes by default.

Once the cluster is created, log in to the cluster:
$ dctl -s <VIP> login -u admin
Note
Administrators will be prompted to enter their passwords.
$ dctl -s 172.16.19.77 login -u admin
Name       : desbm-tb1
Virtual IP : 172.16.19.77
Server     : desbm-tb1.eng.diamanti.com
WARNING: Thumbprint : 8a c2 1b 68 18 ac cb 96 b6 c6 33 b5 a7 92 c7 a1 7b 3e 8e 5d 86 90 d6 1d 4a 5d 11 24 2a 36 7a d8 [CN:diamanti-signer@1691599945, OU:[], O=[] issued by CN:diamanti-signer@1691599945, OU:[], O=[]]
Configuration written successfully
Password:
Successfully logged in

Wait for the status of the nodes to be in the Good state before moving to the next step.

$ dctl cluster status

Example:
$ dctl cluster status
Name           : desbm-tb1
UUID           : 1c4a9b36-36d5-11ee-a45c-000c2918df1e
State          : Created
Version        : 3.6.2 (62)
Etcd State     : Healthy
Virtual IP     : 172.16.19.77
Pod DNS Domain : cluster.local

NAME          NODE-STATUS   K8S-STATUS   ROLE       MILLICORES   MEMORY           STORAGE   SCTRLS LOCAL, REMOTE
edge-etcd-1   Good          Good         master*#   0/4000       0/12GiB          0/0       0/0, 0/0
edgeserv1     Good          Good         master     100/32000    1.07GiB/128GiB   0/0       0/64, 0/64
edgeserv2     Good          Good         master     100/32000    1.26GiB/128GiB   0/0       0/64, 0/64

Note
Verify that the software control plane node displays 0 for storage, local controllers, and remote controllers, and a # next to its role.
The cluster is up and can be accessed using the dctl CLI tool as above, or through a browser at https://<Virtual IP>. The next two steps set up the storage and networking for the cluster.
Setup Storage
Note
This step applies only to DESBM nodes, not to software control plane nodes.
Diamanti Storage Stack (DSS) services are enabled on each node by adding the diamanti.com/dssnode label and setting its value to one of the supported DSS storage classes (for example, medium).
Example:
$ for host in <hostname-1> <hostname-2>
  do
    dctl node label ${host} diamanti.com/dssnode=medium
  done
Once the diamanti.com/dssnode label is added, the diamanti-dssapp-<storage-class> daemon set starts the diamanti-dssapp-<storage-class> pod on each node. Verify that the diamanti-dssapp-<storage-class> pods are in the Running state:

$ kubectl -n diamanti-system get pods | grep dssapp
diamanti-dssapp-medium-99s4j   1/1   Running   2   26h
diamanti-dssapp-medium-fdj24   1/1   Running   0   26h
Verify the storage status using dctl drive list:

$ dctl drive list
NODE        SLOT   S/N                 DRIVESET                               RAW CAPACITY   USABLE CAPACITY   ALLOCATED   FIRMWARE   STATE   SELF-ENCRYPTED
edgeserv1   6      FJBAN6747I020AR2J   126185e7-b6a8-4e11-997d-99cd6b51158d   1.92TB         1.86TB            142.31GB    1.0.0      Up      No
edgeserv1   7      FJBAN6747I020AR2Y   126185e7-b6a8-4e11-997d-99cd6b51158d   1.92TB         1.86TB            142.31GB    1.0.0      Up      No
edgeserv1   8      FJBAN6747I020AR2D   126185e7-b6a8-4e11-997d-99cd6b51158d   1.92TB         1.86TB            142.31GB    1.0.0      Up      No
edgeserv1   9      FJBAN6747I020AR2F   126185e7-b6a8-4e11-997d-99cd6b51158d   1.92TB         1.86TB            142.31GB    1.0.0      Up      No
edgeserv2   6      FJBAN6747I020AR3C   66b564fb-143e-4bfb-b195-e38c0d4f4cbb   1.92TB         1.86TB            142.31GB    1.0.0      Up      No
edgeserv2   7      FJBAN6747I020AR2W   66b564fb-143e-4bfb-b195-e38c0d4f4cbb   1.92TB         1.86TB            142.31GB    1.0.0      Up      No
edgeserv2   8      FJBAN6747I020AR2M   66b564fb-143e-4bfb-b195-e38c0d4f4cbb   1.92TB         1.86TB            142.31GB    1.0.0      Up      No
edgeserv2   9      FJBAN6747I020AR38   66b564fb-143e-4bfb-b195-e38c0d4f4cbb   1.92TB         1.86TB            142.31GB    1.0.0      Up      No
Also, verify the storage status using the dctl cluster status command.

$ dctl cluster status
Name           : dell-edge
UUID           : 1c4a9b36-36d5-11ee-a45c-000c2918df1e
State          : Created
Version        : 3.6.2 (62)
Etcd State     : Healthy
Virtual IP     : 172.16.19.77
Pod DNS Domain : cluster.local

NAME          NODE-STATUS   K8S-STATUS   ROLE       MILLICORES   MEMORY            STORAGE    SCTRLS LOCAL, REMOTE
edge-etcd-1   Good          Good         master*#   100/4000     1.07GiB/12GiB     0/0        0/0, 0/0
edgeserv1     Good          Good         master     7100/32000   25.07GiB/128GiB   0/7.44TB   0/64, 0/64
edgeserv2     Good          Good         master     7200/32000   25.26GiB/128GiB   0/7.44TB   0/64, 0/64
Setup Overlay Network
Before applications are deployed, you must configure a private (overlay) network. Diamanti CNI assigns IP addresses to the applications from this pool.
$ dctl network overlay create default -s 172.30.0.0/16 --isolate-ns=false --set-default

The subnet configured using the above command is a private subnet (VXLAN encapsulated) and does not need to be routed.
Installing the Licenses
Note
The software control plane node does not need any licenses.
Nodes provisioned in these steps come with a 72-hour license pre-configured. To request a trial/PoC license or a subscription license, send an email request to Diamanti Support. The email must include the output of /etc/machine-id from all the nodes:
$ cat /etc/machine-id
a5c52cfa5445404ab9e48ad823b57945
You will receive a license file for each node, named in the pattern PassiveCert - UE_<machine-id>.txt (for example, PassiveCert - UE_a5c52cfa5445404ab9e48ad823b57945.txt).
Copy each node's license file to the path /etc/diamanti/license/ on that node. Do this for all nodes.
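One way to stage the files from a workstation is sketched below with scp and ssh (the file name and node name are placeholders; the diamanti user is the default account described earlier in this guide):

$ scp <license-file-for-edgeserv1>.txt diamanti@edgeserv1:/tmp/
$ ssh diamanti@edgeserv1 'sudo mkdir -p /etc/diamanti/license && sudo mv /tmp/<license-file-for-edgeserv1>.txt /etc/diamanti/license/'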
Login to the cluster:
$ dctl -s <VIP address> login -u admin -p <password>
Activate the license:
$ dctl node license activate
License activation process started for node edgeserv1
License activation process started for node edgeserv2
Check the license list and status. The example below is for the trial license type:
$ dctl node license list
NODE NAME LICENSE ID TYPE STATUS EXPIRATION DATE
edgeserv1 a5c52cfa5445404ab9e48ad823b57945 Trial Active 30 Oct 2023
edgeserv2 5725841532fc42f6b3517258403013a8 Trial Active 30 Oct 2023
$ dctl node license status
Licensing check every: 12h0m0s
Licensing delay: 72h0m0s
Licensing alerts start: 720h0m0s
NAME HOST SERIAL NUMBER LICENSING STATUS CHECK INTERVAL DELAY LAST CHECK
edgeserv1 a5c52cfa5445404ab9e48ad823b57945 Active 12h0m0s 72h0m0s 20m
edgeserv2 5725841532fc42f6b3517258403013a8 Active 12h0m0s 72h0m0s 20m
Advanced Configuration
This section describes how to perform advanced Diamanti node configurations.
Changing the Node Password
Diamanti strongly recommends that you change the password immediately after initially logging in to a Diamanti node.
If not already logged in to the node, using a console monitor, log in to the node using the following credentials:
User: diamanti
Password: diamanti
Change the password using the following command:
passwd
Example:
$ passwd
Changing password for user diamanti.
Current password:
New password:
Retype new password:
Configuring the Date, Time, or Timezone
You can configure the date, time, or time zone on a Diamanti node. To change the date and time, use the following command:
sudo date -s <new-date-and-time>
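For example (the date and time shown are arbitrary illustrative values):

$ sudo date -s "2024-05-20 14:30:00"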
To change the time zone, use the following command:
sudo timedatectl set-timezone <new-time-zone>
Example:
$ sudo timedatectl set-timezone America/Los_Angeles
You can list all available time zones using the following command:
$ timedatectl list-timezones
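If the list is long, you can filter it; for example, to show only the America time zones:

$ timedatectl list-timezones | grep America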
Manually Configuring the Management IP Configuration
You can manually configure the management port with a static IP configuration. The IP address must be configured before creating the cluster. Using a console monitor that is logged in to the node, edit the ifcfg-eno1
file using the following command:
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eno1
In the file, change BOOTPROTO=dhcp to BOOTPROTO=none.
BOOTPROTO=none
Also, add the following lines to the bottom of the file:
IPADDR=<ip-address>
NETMASK=<netmask-address>
GATEWAY=<gateway-address>
Example:
IPADDR=172.16.6.226
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
Enable the static IP configuration.
$ sudo ifdown eno1
$ sudo ifup eno1
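You can then confirm that the static address is active (eno1 is assumed to be the management interface, as in the example above):

$ ip addr show eno1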
Manually Setting up the DNS Configuration
You can manually configure one or more DNS servers in your environment, if needed.
Check the nameserver entries that are currently configured using the following command:
$ cat /etc/resolv.conf
There are directives in resolv.conf that specify the default search domains as well as the IP addresses of name servers (DNS servers) that are available for use.
Edit the resolv.conf file.
sudo vi /etc/resolv.conf
Add (or append) the search domains using the following format:
search <dns-domain-1> ... <dns-domain-n>
Add one or more name servers using the following format:
nameserver <dns-server-1>
...
nameserver <dns-server-n>
Verify that a DNS server is configured using the following command:
$ cat /etc/resolv.conf
# Generated by NetworkManager
search bos.diamanti.com
nameserver 172.16.1.2
Manually Configuring the DNS Mapping
If a DNS server is not available on the network, you need to manually update the /etc/hosts file on the Diamanti UE on BM node to provide name resolution information for all Diamanti nodes that will be in the cluster.
Do the following:
Using a console monitor that is logged in to the node, edit the /etc/hosts file using the following command:
$ sudo vi /etc/hosts
- In the file, manually add the IP address, node name, and fully qualified domain name (FQDN) of all Diamanti nodes that will be in the cluster.
For example:
172.16.6.226    edgeserv1.example.com   edgeserv1
172.16.230.82   edgeserv2.example.com   edgeserv2
172.16.230.84   dssserv12.example.com   dssserv12
Manually Configuring the NTP Server
You can set the Network Time Protocol (NTP) servers used by the Diamanti UE on BM node.
Using a console monitor that is logged in to the node, edit the chrony.conf file.
$ sudo vi /etc/chrony.conf
Delete unneeded servers from the chrony.conf file.
Add the NTP servers to the file. Use the following format:
server <ntp-server-FQDN-or-IP-address> iburst
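For example (the pool.ntp.org hostnames below are placeholders; substitute the NTP servers used in your environment):

server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst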
Save the chrony.conf file.
Feature Matrix
This section provides a feature matrix outlining supported and unsupported features available with Diamanti UE on BM.
| Feature | Diamanti UE on BM |
|---|---|
| VIP-based cluster management | Supported |
| User management (LDAP/RBAC) | Supported |
| Persistent storage | Supported |
| Mirroring/Snapshots | Supported |
| NFS volumes | Supported |
| Async replication | Supported |
| Overlay CNI | Supported |
| Helm package management | Supported |
| Air-gapped cluster | Supported |