Tuning Linux virtual machines on Azure

Modifying ring buffer sizes and a few other parameters with UDEV rules

We have all been there, trying to get the most out of our computers, servers, and so on. Linux offers its users many great kernel options that, when used correctly, can improve not only performance but also reliability, and can make certain workloads behave more predictably.

When running Linux on Azure, there are a few things we can tune to improve workload performance, overall system stability, responsiveness, and resource utilization. We can start by increasing the ring buffer sizes of the network adapters, both the synthetic adapter and the accelerated (Mellanox) adapter used with Azure Accelerated Networking.
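Before changing anything, it is worth checking which driver backs each interface and what ring buffer sizes it currently supports. A quick sketch, assuming an interface named eth0 (yours may differ):

# Check the driver behind the interface (hv_netvsc = synthetic, mlx4/mlx5 = Accelerated Networking VF)
$ sudo ethtool -i eth0

# Show the current and maximum supported RX/TX ring buffer sizes
$ sudo ethtool -g eth0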

The easiest way to make this change, and to make sure it persists across reboots, redeploys, and so on, is to create a UDEV rules file under /etc/udev/rules.d with the following contents:

# This UDEV rules file persists a few settings across the network adapters
# Drop this file under /etc/udev/rules.d
# You can reboot the VM after placing the file in the folder, or trigger the changes immediately with:
# sudo udevadm control --reload-rules && sudo udevadm trigger

# Setup the Accelerated Networking (Mellanox VF) interface
# rx-flow-hash udp4 sd hashes UDP/IPv4 flows on source and destination IP
SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add",  RUN+="/usr/sbin/ethtool -G $env{INTERFACE} rx 8192 tx 8192"
SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add",  RUN+="/usr/sbin/ethtool -N $env{INTERFACE} rx-flow-hash udp4 sd"


# Setup the Synthetic (hv_netvsc) interface
SUBSYSTEM=="net", DRIVERS=="hv_netvsc*|mlx*", ACTION=="add",  RUN+="/usr/sbin/ethtool -G $env{INTERFACE} rx 18000 tx 2500"

# Adds NOQUEUE to VF
SUBSYSTEM=="net", DRIVERS=="hv_pci|mlx*", ACTION=="add", RUN+="/usr/sbin/tc qdisc replace dev $env{INTERFACE} root noqueue"

A ready-to-use UDEV rules file is available in my GitHub repository if needed.

With the settings above, we are basically setting:

  • RX/TX ring buffers for the AN (Mellanox VF) interface to 8192 each
  • RX/TX ring buffers for the Synthetic interface to 18000 (RX) and 2500 (TX)
  • UDP-over-IPv4 receive flow hashing on source and destination IP for the AN interface
  • The noqueue qdisc as the root qdisc of the VF interface

With these modifications, we should drastically reduce retransmissions, since the larger ring buffers give the adapters more headroom before packets are dropped under load.
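Once the rules are in place (after a reboot, or after reloading and triggering UDEV as noted in the file header), it is easy to verify that the settings were applied. Again, eth0 is just an example interface name:

# Ring buffer sizes should now report the values set by the UDEV rules
$ sudo ethtool -g eth0

# The UDP/IPv4 receive flow hash should report source and destination IP
$ sudo ethtool -n eth0 rx-flow-hash udp4

# The VF interface should show noqueue as its root qdisc
$ tc qdisc show dev eth0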


Tuning kernel parameters using sysctl.conf

There are many parameters in the Linux kernel that can be tuned using sysctl.

In most cases, the recommendation is to create separate drop-in files under /etc/sysctl.d for cleaner and more granular control, but you can also simply add your changes to the provided /etc/sysctl.conf.

Before putting any changes in place, it is always recommended to back up the current values using:

$ sudo sysctl -a | sudo tee /etc/sysctl.conf.original > /dev/null
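As an illustration of the drop-in approach, a hypothetical file such as /etc/sysctl.d/90-azure-tuning.conf could then hold your changes; the file name and values below are only examples, not recommendations, and the drop-in is loaded without a reboot:

# /etc/sysctl.d/90-azure-tuning.conf (example name, illustrative values only)
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432

# Apply every sysctl configuration file, including the new drop-in
$ sudo sysctl --system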

Another way to reset the values is to simply remove the changes from /etc/sysctl.conf, or to move the drop-in file out of /etc/sysctl.d, and then reboot the server.

Backing up the values allows you to reset them without a reboot. I would personally also recommend backing up just the parameters you intend to change, since a few of them are dynamic; that way you only restore the ones that were actually altered.
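As a sketch of that approach, the keys and the backup path below are just examples; read-only or dynamic keys may print warnings when reloaded:

# Save only the parameters you plan to change (example keys, example path)
$ sudo sh -c 'sysctl net.core.rmem_max net.core.wmem_max > /root/sysctl.backup'

# Later, restore either the small per-parameter backup or the full one, without rebooting
$ sudo sysctl -p /root/sysctl.backup
$ sudo sysctl -p /etc/sysctl.conf.original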

Example and commented sysctl.conf

I keep an updated, commented example of a tuned sysctl.conf in my GitHub repository.

If we apply the ring buffer changes above together with the base sysctl configuration, we should be able to reach the maximum bandwidth for a given virtual machine size and, in most cases, reduce the number of retransmissions to virtually none.