Thursday, July 15, 2010

I/O alignment in VMware ESX

In many cases, VMDK partitions can become misaligned, leading to performance degradation. Before deploying your virtual machine, check these recommendations for NetApp storage.

VMDK partitions need to be aligned at both the VMFS and guest OS levels. For example, you can align the partitions at the VMFS level by selecting the vmware LUN type when creating your LUNs. By doing so, the partitions are aligned to sector 128 or sector 0, depending on whether you use VirtualCenter or vmkfstools to create the VMFS. Regardless, the partitions will be aligned as both are multiples of 4KB, thereby fulfilling the WAFL read/write requirements.

However, because of the 63-sector offset implemented by the Windows and Linux operating systems, the partitions will still not be aligned at the guest OS level. Therefore, you must manually align the .vmdks at the guest OS level for VMFS and NFS datastores.
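To see why the 63-sector offset is a problem, here is a quick arithmetic sketch (assuming 512-byte sectors and the 4 KB WAFL block size):

```shell
#!/bin/sh
# Byte offset = starting sector * 512. A partition satisfies the WAFL
# requirement when that offset is a multiple of 4096 (4 KB).
for start in 0 63 128; do
  offset=$(( start * 512 ))
  if [ $(( offset % 4096 )) -eq 0 ]; then
    echo "sector $start (offset $offset): aligned"
  else
    echo "sector $start (offset $offset): misaligned"
  fi
done
```

Sectors 0 and 128 land on 4 KB boundaries, but the default sector-63 start (offset 32256 bytes) does not, which is why guest partitions need to be realigned.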

Cause of this problem

Partial writes occur when using the VMFS filesystem or NFS-based VMDKs on ESX.

This issue is not unique to NetApp storage; any storage vendor or host platform may exhibit it. You can determine whether partial writes are occurring by looking at the wp.partial_write counter in perfstat output; wp.partial_write is a block counter of misaligned I/O. In Data ONTAP 7.2.1 and later 7G releases, the read/write_align_histo.XX and read/write_partial_blocks.XX counters are also available in the stats stop -I perfstat_lun section of a perfstat.

Aligning your partitions to a 4K boundary in both the VMDK and the LUN is a recommended best practice (see TR-3428 for this and other ESX-on-NetApp best practices).

RDMs are not affected by partial writes as long as the LUN type is set to the RDM OS type.

If you need more details on fixing this issue using mbralign, see the NetApp Knowledge Base.


Friday, July 2, 2010

Performance Tuning Your Application Server

Instructions for performance-tuning the OS for JBoss, Tomcat, WebSphere, or Oracle AS.

First, you need to set the kernel parameter for shared memory to be at least as big as you need for the amount of memory you want to set aside for the JVM to use as large page memory. Personally, I like to just set it to the maximum amount of memory in the server, so I can play with different heap sizes without having to adjust this every time. You set this by putting the following entry into /etc/sysctl.conf:

kernel.shmmax = n

where n is the number of bytes. For a server with 4 GB of RAM, use 4294967296.

Next, set a virtual memory kernel parameter specifying how many large memory pages you want:


vm.nr_hugepages = n

where n is the number of pages, based on the page size listed in /proc/meminfo:

Hugepagesize: 2048 kB

For example, I wanted to set this to 3 GB, so I set the parameter to 1536, which is (3*1024*1024*1024)/(2*1024*1024): that is, 3 GB divided by 2 MB, since 2048 kB is 2 MB.
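The arithmetic above can be sketched as a small shell calculation (assuming the 2048 kB Hugepagesize reported in /proc/meminfo):

```shell
#!/bin/sh
# Desired huge-page pool in GB, and the huge page size in kB
# (from the Hugepagesize line of /proc/meminfo).
heap_gb=3
hugepagesize_kb=2048

# Number of pages = (GB converted to kB) / page size in kB.
nr_hugepages=$(( heap_gb * 1024 * 1024 / hugepagesize_kb ))
echo "vm.nr_hugepages = $nr_hugepages"
```

This prints `vm.nr_hugepages = 1536`, matching the value above; change heap_gb to size the pool differently.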

Set another virtual memory parameter to give your process permission to access the shared memory segment. In /etc/group, create a new group called hugetablespace; let's suppose its GID is 1501.

Put that group id in /etc/sysctl.conf as follows:
vm.hugetlb_shm_group = 1501

The user that JBoss runs as must be a member of this group.

Then allow that user to lock the memory, in /etc/security/limits.conf, as follows:
jboss soft memlock n
jboss hard memlock n

where n is equal to the number of huge pages set in vm.nr_hugepages times the page size from /proc/meminfo: 1536 * 2048 = 3145728. This concludes the OS setup, and now we can actually configure the JVM.
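For reference, the OS-level settings above can be collected into one sketch. The values assume the worked example in this post (4 GB server, a 3 GB pool of 1536 huge pages, a hugetablespace group with GID 1501, and JBoss running as the jboss user); adjust them for your system.

```shell
#!/bin/sh
# Append the kernel parameters discussed above to /etc/sysctl.conf.
cat >> /etc/sysctl.conf <<'EOF'
kernel.shmmax = 4294967296
vm.nr_hugepages = 1536
vm.hugetlb_shm_group = 1501
EOF

# Allow the jboss user to lock the huge-page pool (pages * page size in kB).
cat >> /etc/security/limits.conf <<'EOF'
jboss soft memlock 3145728
jboss hard memlock 3145728
EOF

# Apply the sysctl settings without a reboot.
sysctl -p
```

Note that the huge-page pool may not be fully allocated on a long-running system with fragmented memory; a reboot guarantees the pages are reserved.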

The JVM parameter for the Sun JVM is -XX:+UseLargePages; with it set, the JVM allocates its heap from the huge pages reported in /proc/meminfo.
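For example, in a hypothetical JBoss startup configuration fragment, the heap settings must fit inside the huge-page pool configured above:

```shell
# Hypothetical JAVA_OPTS fragment: enable large pages on the Sun JVM and
# keep the fixed-size heap within the 3 GB huge-page pool reserved above.
JAVA_OPTS="-Xms2560m -Xmx2560m -XX:+UseLargePages"
```

-XX:+UseLargePages is the Sun/HotSpot flag; other JVMs (IBM, for instance) use different options for large-page support.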