The RapidVPS Kernel
All RapidVPS servers run the 2.6 series Linux kernel with the OpenVZ virtualization container framework; our current stable release is based on 2.6.18. We maintain two in-house kernel trees: devel and stable. Unlike other VPS hosts, which simply use a pre-built kernel image from a vendor and hope for the best, our team includes two kernel developers who actively follow the LKML and openvz-devel lists to stay on top of features, bugs, and architecture changes. We run the stable branch on production VPS and dedicated servers, and the devel branch on our half dozen internal machines.
Some of the high-level areas in which our kernels are modified and tuned to better support our hardware and clients are:
- Per VE Disk I/O Statistics
Disk I/O bandwidth is a crucial and finite system resource. Our ve-io patch generates per-VE and per-process statistics on disk I/O consumption, rate, and frequency. Through this, our system can react to saturated disk I/O channels and reach an accurate conclusion about which application, VE, or process is causing the problem. Other hosts can only approximate with high-level block- and partition-level tools such as iostat, or guess at disk consumption from cputime in a tool like top; RapidVPS can pinpoint the I/O-hogging application and VE in seconds.
To illustrate why and how this is useful, here are some rather interesting graphs we saved from real customer environments; an experienced admin should be able to spot a problem in each. When disk I/O bandwidth on a node is saturated, it causes slowdowns for ALL customers who want to access the disk. Using the data graphed below, but processed in real time, our system pinpoints exactly what is saturating the disk I/O, and we then make the call whether to reprioritize, kill, or disable the problem application. These numbers are NOT per physical server, per partition, or per block device. They are per VPS; this is very important.
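The core idea can be sketched with mainline facilities alone. The following is our illustrative approximation, not the ve-io patch itself: it ranks processes by bytes read from disk using the per-task I/O accounting some 2.6 kernels expose at /proc/&lt;pid&gt;/io (requires CONFIG_TASK_IO_ACCOUNTING); the real patch additionally rolls these counters up per VE, which is not shown here.

```shell
#!/bin/sh
# Rank processes by read_bytes, a rough stand-in for the per-process
# half of the ve-io statistics described above. The function name and
# argument are ours, for illustration only.
top_disk_readers() {
    for pid in /proc/[0-9]*; do
        [ -r "$pid/io" ] || continue
        awk -v p="${pid#/proc/}" '/^read_bytes/ {print $2, p}' "$pid/io"
    done | sort -rn | head -n "${1:-10}"
}

top_disk_readers 5   # top five readers: "<bytes> <pid>" per line
```

A tool like this only answers "which process" on the whole node; the value of the in-kernel patch is attributing the same counters to a VE cheaply enough to run continuously.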
- Guaranteed CPU Resources
Other hosts employ "fair-share" CPU resources. In our experience this is a huge mistake (we started out this way, since a 100 MHz package scares people away). Having quickly learned that CPU resources should be guaranteed rather than merely fair-shared, so that higher-paying customers reliably get a larger share, we coded a cputime-to-MHz equivalency algorithm into our system so that the kernel can fairly enforce CPU guarantees even at >90% CPU saturation. This takes the guessing game out of hosting; we learned that people would rather know exactly how much they are paying for and how much they are using. Think of the insane resources of many shared hosting offers: 5 TB bandwidth, 3 TB disk space. What does that really guarantee a web application when CPU resources are not guaranteed? Wouldn't you rather know "I am at 80% utilization of my overall hosting package"? RapidVPS allows you to do this by delivering resource-based reports of your environment via RRD graphs and text output. You can know exactly how much CPU you have used in the last week, as well as many other resources.
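The MHz-equivalency idea reduces to simple proportional arithmetic. This is a minimal sketch with invented names and numbers, not RapidVPS's actual algorithm: a VE's guaranteed slice is its weight over the sum of all weights on the node, expressed in MHz so the customer sees a concrete figure rather than an abstract share.

```shell
#!/bin/sh
# Hypothetical figures for illustration only.
NODE_MHZ=2400        # node capacity: one 2.4 GHz core
VE_UNITS=100         # this VE's weight
TOTAL_UNITS=2400     # sum of weights across all VEs on the node

# guaranteed MHz = node MHz * (this VE's weight / total weight)
GUARANTEED_MHZ=$(( NODE_MHZ * VE_UNITS / TOTAL_UNITS ))
echo "${GUARANTEED_MHZ} MHz guaranteed"   # prints "100 MHz guaranteed"
```

Under saturation the kernel's job is to enforce exactly this floor for every VE; below saturation, idle cycles can still be shared out on top of it.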
- Per VE Network Statistics
Most OpenVZ hosts do not provide a way to track your bandwidth. We employ a custom netfilter kernel module which exports per-VE network statistics to userland. These numbers are then accounted and stored in an RRD database, and can be graphed and presented to our staff and the client. The end result is strong intelligence on each client's bandwidth usage, a critical factor in operating a stable and profitable network.
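To make the counters concrete, here is a hedged sketch of the userland side using only the standard /proc/net/dev counters (the function name is ours): on an OpenVZ node each VE's traffic crosses its venet interface, so sampling byte counters like these into RRD at a fixed interval yields the per-VE graphs described above. The custom netfilter module exists because per-VE attribution in the general case needs kernel help, which this sketch does not show.

```shell
#!/bin/sh
# Print cumulative RX/TX byte counters for one interface.
# /proc/net/dev columns after the colon: rx bytes is the 1st field,
# tx bytes the 9th.
iface_bytes() {
    awk -F'[: ]+' -v i="$1" '$2 == i { print "rx=" $3 " tx=" $11 }' \
        /proc/net/dev
}

iface_bytes lo   # e.g. "rx=81920 tx=81920"
```

Periodic samples of these monotonically increasing counters are exactly what rrdtool's COUNTER data-source type expects.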
- Virtualized meminfo support
Most VZ/OpenVZ hosts confuse their clients when user tools such as "free" and control panels such as WHM misinterpret the state of the VE's system memory. Further, many applications consult /proc/meminfo to adjust and throttle their resource usage. Vanilla VZ/OpenVZ kernels export the hardware node's memory statistics into the VE environment; RapidVPS avoids this problematic situation with a kernel modification. The end result is that each VE has an accurate, virtualized /proc/meminfo report.
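The difference is visible from inside the container. This sketch (the helper name is ours) reads the same two /proc/meminfo fields that "free" and most self-throttling applications consult; on a vanilla VZ/OpenVZ kernel these show the hardware node's RAM, while with the virtualized-meminfo modification they reflect the VE's own allocation.

```shell
#!/bin/sh
# Print the MemTotal/MemFree lines an application would see.
# Run inside a VE, these should describe the container, not the node.
mem_summary() {
    awk '/^MemTotal:|^MemFree:/ {print $1, $2, $3}' /proc/meminfo
}

mem_summary
```

Since /proc/meminfo is the interface, no application changes are needed; every consumer of it is corrected at once.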