vSphere 6.5 what's new – core services
vSphere 6.5 was announced recently at VMworld in Barcelona. I have tried to summarize all the new and interesting features, and I think we all have something to look forward to. Some of those features are really nice (especially in vSAN), but all of them look amazing.
Let's start with the core services. There will be another post in a few days regarding general features like VUM integration in the vCenter Server appliance and other enhancements.
Virtual SAN 6.5 what’s new
- Virtual SAN iSCSI Service
- 2-node Direct connect
- Licensing changes
Virtual SAN iSCSI Service
One of the big things here. Have you ever thought about exposing your vSAN to the outside world? Yes, it is now supported! The great thing is that the LUNs you expose are still vSAN objects, so you have the same management abilities as usual. What does that mean? Well, you can simply assign them a storage policy and change performance on the fly. Setting things up is quite a straightforward process:
Enable iSCSI Target Service
Create Target and LUN
Please note that the main use case is MSCS or Oracle clusters; you should not use it, for example, as a shared datastore for your KVM environment. Other limitations are 1024 LUNs per cluster and 128 targets per cluster. The size limit is 62 TB per LUN.
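The two steps above can also be sketched with esxcli. This is only a sketch: the `esxcli vsan iscsi` namespace is new in vSAN 6.5 and the exact flag names below are assumptions from memory, so verify them with `esxcli vsan iscsi --help` on your host first.

```shell
# Sketch only, run on an ESXi host in the vSAN cluster.
# Flag names are assumptions; check `esxcli vsan iscsi --help` before use.

# Step 1: enable the vSAN iSCSI target service
esxcli vsan iscsi status set --enabled=true

# Step 2: create a target, then attach a LUN to it (max 62 TB per LUN)
esxcli vsan iscsi target add --alias=mytarget
esxcli vsan iscsi target lun add --target-alias=mytarget --lun-id=0 --size=1T
```

The same workflow is also available from the Web Client under the cluster's vSAN configuration, where you can additionally pick the storage policy for the target objects.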
2-node direct connect
In the past we have seen scenarios where it would be useful to connect your two ROBO hosts directly using 10 GbE NICs to get speed for vSAN / vMotion, but it was not ideal to buy a 10 GbE switch just for that. The problem was that the witness itself had to be connected to the 10 GbE network. Times have changed, and now you can specify which VMkernel interface you want to use for your witness traffic. Just a small thing like that can lower your ROBO TCO dramatically.
esxcli vsan network ip add -i vmk<X> -T=witness
Licensing changes
Let's keep this simple. All-Flash configurations are now possible with the Standard edition of vSAN! Please note that data services like deduplication, compression, or RAID-5/6 are still available only in the Advanced license package, but this is still huge, and anybody can benefit from All-Flash without additional costs.
vSphere 6.5 – HA what’s new
vSphere HA is one of the core services in a vSphere environment; it is responsible for ensuring high availability of your workloads.
- Admission Control
- Restart Priorities and Orchestrated Restart
- Proactive HA
Admission Control
Two important changes here. Remember situations when you added a new host to the cluster and then tried to deploy a new VM, which failed because of insufficient resources in the cluster? Well, you probably forgot to adjust your admission control policy and failover capacity. Now this is done automatically, and all you need to do is set how many hosts can go down while your environment stays healthy.
In short, if you had 4 hosts, your failover capacity was probably 25%. Once you add another host, for 5 hosts total, your failover capacity becomes just 20%, and you do not need to do anything about it.
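The percentage math above is simple enough to sketch in a few lines of shell (the numbers are just the example's; vSphere computes this for you):

```shell
# Failover capacity as a percentage of cluster hosts,
# reserving capacity for one tolerated host failure.
tolerated=1
for hosts in 4 5; do
  echo "$hosts hosts -> $(( tolerated * 100 / hosts ))% failover capacity"
done
```

With 4 hosts this prints 25%, with 5 hosts 20%, matching the example.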
Another setting that was not there before is performance degradation tolerance. What is it for? Basically, it utilizes DRS and can raise an alarm if your workload after a failure would result in overcommitment.
Let’s imagine following situation:
- You have 3 hosts, each with 128 GB RAM
- 1 host failure to tolerate
- 300 GB of memory allocated to VMs
- 0% resource reduction tolerated
One host goes down, leaving 256 GB of RAM in the cluster.
300 GB is needed but only 256 GB is available, so an alarm will be raised to notify the admin about the situation.
Proactive HA
This feature covers situations which may result in downtime. Imagine you have a server with a dual power supply and one of them fails. The host is still up and running, but you are now in a single-point-of-failure condition, and another failure will result in downtime. The same goes, for example, for dual NICs. How does vCenter know that something is wrong with your hardware? Well, it's all about Health Providers that are supplied by the manufacturer of your server.
You have two basic options for how to react to such an event. You can either put the host into maintenance mode, as you know it, or you can quarantine the host. What does quarantine mean? By default, VMs will be migrated off the host, but if such an action would result in overcommitment of your cluster, the VMs will be left on the quarantined host. The same goes for anti-affinity rules: if migration would break such a rule, it won't happen, and the VM will be kept on the affected host. The host will also not be considered for placing new VMs.
Restart Priorities and Orchestrated Restart
In the past hardly anybody used restart priorities, because even when you had dozens of VMs running on a failed host, they were all restarted almost simultaneously.
But what if you need to fully start some priority VMs before moving on? Now you can easily do that using "VM Overrides" in the HA cluster settings.
You can group VMs into "priority" groups, and those groups will be started in sequence.
You can specify how long to wait between batches, as well as when to actually start the next batch. The default is "Resources allocated", which means that once resources are allocated to a VM, HA moves on. If you select this option with a zero-second wait between batches, all of your VMs will be started almost instantly, no matter the priority.
Much more interesting are "Guest Heartbeats detected" and "App Heartbeats detected". With these, the process waits for a guest/app heartbeat from VMware Tools, so you can be sure that your VM is running correctly before the next batch starts.
Since app heartbeats are not used that much, the majority of us will be using guest heartbeats.
Another possibility is to tie VMs into groups that will be launched in sequence (within the same priority group). Imagine you are running a multi-tier app consisting of an app tier and a DB tier. You can create a VM-to-VM rule that describes such a dependency.
vSphere 6.5 – DRS what’s new
Nothing especially new here, just a few minor improvements.
- Predictive DRS
- Network-Aware DRS
- DRS Profiles
Predictive DRS
If you are running vROps in your environment, DRS can utilize its data for placement actions. Imagine you have a VM that spikes CPU utilization every day at 2:00 am. DRS can now migrate such a VM to another host even before the spike occurs, based on the historical performance data in vROps.
Network-Aware DRS
Another minor improvement. Since vSphere 6.0 you have been able to set a network reservation for a VM. This can now be taken into consideration by DRS itself, as well as the utilization of the physical NICs of your hosts. Once network saturation of the physical uplinks occurs, DRS will try to resolve the situation by migrating VMs to another host with lower saturation.
DRS Profiles
Basically, you can now tweak DRS a little more from the UI itself, without using hidden advanced parameters. There are a few areas that can be configured, as shown in the picture. Those parameters are pretty straightforward and speak for themselves.
vSphere 6.5 – VMFS what’s new
Automatic space reclamation
I always hated manually running vmkfstools -y to reclaim free space back to the array. Now it is possible to automate such tasks, either from the UI or ESXCLI, so you do not need to bother with it anymore. You can turn the feature on/off per datastore.
esxcli storage vmfs reclaim config get -l mydatastore
   Reclaim Granularity: 1048576 Bytes
   Reclaim Priority: low
esxcli storage vmfs reclaim config set -l mydatastore -p high
Using ESXCLI you can set the priority even to medium or high (in the UI, for whatever reason, only low is accepted).
You can now have up to 512 devices and 2000 paths per host (previously 256 devices and 1000 paths). In some cases that was a problem, and we were hitting those maximums, especially with 4 or 8 paths per device.