Reducing Shared Storage IO by up to 99% with MCS Storage Optimization

This is part two in a series of posts about Machine Creation Services Storage Optimization (MCSIO). For those who are not familiar with MCSIO, see the first blog in the series, Introduction to MCSIO Storage Optimization, for an overview of the technology and architecture of MCSIO.

To answer the question of how much MCSIO can reduce shared storage IOPS, a series of tests were conducted with MCSIO configured for temporary memory and temporary disk caching. Although this blog focuses on that configuration, the results also give useful insight to those looking at other MCSIO configurations. Through the tests, we investigate the effects of using this feature with RDS and VDI desktops and show how it can help reduce shared storage IO.

Test Methodology

I decided to use LoginVSI, a tool many of you are probably familiar with, to perform a series of single-server scalability tests with the knowledge worker workload, with users logging on at a rate of one every 30 seconds.

Running this test served two functions: it enabled observation of system resource usage under load, such as IOPS and CPU usage, and measurement of the user session information defined by LoginVSI. Each test was performed against catalogs with increasing MCSIO temporary memory cache sizes. This allows us to characterize the effect of the temporary memory cache on shared storage IO and temporary disk cache utilization. The temporary memory cache size was increased until the temporary disk cache went unused and all temporary data was held in the temporary memory cache. Baseline tests were run on standard MCS for comparison.

Results for IOPS are shown as the sum of IOPS for each test configuration.
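For reference, here is a minimal Python sketch, not the actual test tooling, of one reasonable way to roll per-interval read and write IOPS samples into a single sum IOPS figure and to express the reduction against the standard MCS baseline; the function names and sample data are illustrative assumptions only.

```python
# Minimal sketch (illustrative only): aggregate per-interval IOPS samples into a
# "sum IOPS" figure and compare an MCSIO configuration against the MCS baseline.

def sum_iops(read_iops: list[float], write_iops: list[float]) -> float:
    """Total read + write IOPS observed across a test run."""
    return sum(read_iops) + sum(write_iops)

def percent_reduction(baseline: float, candidate: float) -> float:
    """Percentage by which 'candidate' is lower than 'baseline'."""
    return (baseline - candidate) / baseline * 100.0

# Hypothetical usage with made-up sample data:
baseline_total = sum_iops([120.0, 130.0, 125.0], [300.0, 310.0, 305.0])
mcsio_total = sum_iops([15.0, 12.0, 14.0], [40.0, 38.0, 41.0])
print(f"Reduction vs. standard MCS: {percent_reduction(baseline_total, mcsio_total):.0f}%")
```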

Results using MCSIO with RDS workers

These tests were conducted on Windows 2012 R2 RDS workers, following the defined methodology.

SUM IOPS: Windows 2012r2

Tests with the default 256MB memory cache size show that write IOPS to shared storage are effectively removed by MCSIO and redirected to the temporary memory and disk cache. With the 256MB configuration, the total write IOPS to the temporary disk are 39% lower than those of the standard MCS configuration on shared storage. The larger the temporary memory cache configured for the machine catalog, the greater the reduction in IO load on the temporary cache disk. With the 4GB configuration, all temporary data is held in the temporary memory cache and the temporary cache disk is not used at all. This results in a total IOPS reduction of ~93% compared with standard MCS on shared storage.

For the 256MB test, the redirected sum read/write IOPS are only 7% lower than the standard MCS baseline read/write IOPS. Why is that important? The redirected IO shows two things:

  • how much IO load is placed on the temporary disk cache
  • how effective the temporary memory cache is at reducing IO to shared storage

This modest reduction indicates that, for this test workload, the temporary memory cache is rapidly consumed. As a result, most of the IOPS that were previously issued to shared storage on standard MCS are now directed to the temporary disk.

More temporary memory cache = fewer read/write IOPS

To sum up, when using a small amount of temporary memory cache, you are effectively placing most of your desktops' IO on the temporary disk cache. The importance of additional memory was shown in further tests: doubling the available temporary memory cache from 256MB to 512MB reduced total read/write IOPS by 50%, 1GB reduced them by 77%, and at 4GB only the temporary memory cache is used, with no reads from or writes to the temporary cache disk.

This result underscores something important that should always be considered when configuring your environment: understanding the balance between temporary memory and disk cache utilization. It will affect decisions about the size, type, and capacity of the storage used for the temporary disk cache, for example local hypervisor disks, NAS, SAN, and so on.

It is important to ensure that the temporary disk cache is able to handle the IO traffic directed at it; otherwise, congestion in temporary storage could degrade the performance and usability of the system. The results show that the temporary memory cache can be used to manage the IO placed on your temporary cache disk: the larger the temporary memory cache, the lower the temporary disk space and IO requirements.

Another way to demonstrate this is with the perfmon counters installed by default on your RDS or VDI workers. They are a great way to gain insight into temporary cache usage during a session and, in turn, can be useful when sizing your environment.

There are 28 counters in total, but the most useful show the usage and size of the temporary caches and the IO activity of the OS:

  • \computername\Citrix MCS Storage Driver\Cache disk used
  • \computername\Citrix MCS Storage Driver\Cache disk size
  • \computername\Citrix MCS Storage Driver\Cache used
  • \computername\Citrix MCS Storage Driver\Cache target size
  • \computername\Citrix MCS Storage Driver\System disk bytes written
  • \computername\Citrix MCS Storage Driver\System disk bytes read
  • \computername\Citrix MCS Storage Driver\Cache disk bytes written
  • \computername\Citrix MCS Storage Driver\Cache disk bytes read

Using the "Cache disk size" counter, we can see the relationship between the amount of temporary memory cache assigned to the deployed machines and the size of the temporary cache disk.
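As a rough sketch, assuming the counter names listed above (verify them on your own workers, for example with `typeperf -q`, since exact names can vary by product version), these counters could be sampled inside the guest with Windows' built-in typeperf tool and a few lines of Python. This is illustrative only, not the method used for the tests in this post.

```python
import csv
import subprocess

# Counter paths taken from the list above; treat these as assumptions and
# confirm the exact object/counter names on your workers with `typeperf -q`.
COUNTERS = [
    r"\Citrix MCS Storage Driver\Cache disk used",
    r"\Citrix MCS Storage Driver\Cache disk size",
    r"\Citrix MCS Storage Driver\Cache used",
    r"\Citrix MCS Storage Driver\Cache target size",
]

def sample_counters(samples: int = 5, interval_s: int = 30) -> list[dict[str, str]]:
    """Collect a few CSV samples of the MCSIO counters via typeperf."""
    out = subprocess.run(
        ["typeperf", *COUNTERS, "-sc", str(samples), "-si", str(interval_s)],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = [r for r in csv.reader(out.splitlines()) if r]
    header, data = rows[0], rows[1:]
    # Keep only well-formed data rows (timestamp plus one value per counter).
    return [dict(zip(header, r)) for r in data if len(r) == len(header)]

if __name__ == "__main__":
    for sample in sample_counters(samples=3, interval_s=10):
        print(sample)
```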

MCSIO temporary disk cache growth rate 2012r2

As the memory cache size is increased, we see the temporary disk cache being used later in the test, once the memory cache overflows. This also leads to smaller temporary disk cache sizes. The growth of the temporary cache disk levels off during the tests; this is the expected behavior. Once MCSIO has cached enough of the operating system and user data, it is able to re-use and re-write the memory and disk cache, which slows the growth of the temporary disk.
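To see this levelling-off for yourself, one simple approach (a sketch, assuming you have collected evenly spaced samples of the "Cache disk used" counter, for example with the script above) is to look at the growth rate between consecutive samples; a rate that trends towards zero matches the behavior described here. The sample values below are hypothetical.

```python
def cache_disk_growth_rates(cache_disk_used: list[float], interval_s: float) -> list[float]:
    """Growth of the temporary cache disk (units per second) between consecutive samples."""
    return [
        (later - earlier) / interval_s
        for earlier, later in zip(cache_disk_used, cache_disk_used[1:])
    ]

# Hypothetical samples: fast growth early in the test, then levelling off.
samples = [0.5, 2.0, 3.2, 3.9, 4.1, 4.2, 4.2]  # e.g. GB used, one sample per minute
print(cache_disk_growth_rates(samples, interval_s=60.0))
```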

Results using MCSIO on VDI workers

This set of tests was performed on Windows 10 x86 VDI workers, following the defined test methodology.

MCSIO SUM IOPS Windows 10

For VDI desktops, a smaller amount of temporary memory cache makes a remarkable difference in behavior. Using the default memory cache size of 256MB, the write IOPS that standard MCS places on shared storage are effectively removed, with a total reduction of 99%. We also see the IOPS drop from a total of 1,339 read/write IOPS handled on shared storage with standard MCS to 20 read/write IOPS on the temporary cache disk.
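As a quick sanity check of those figures, the drop from roughly 1,339 IOPS on shared storage to around 20 IOPS on the temporary cache disk works out to a reduction of about 98.5%, consistent with the ~99% quoted; a one-liner makes the arithmetic explicit.

```python
# Figures quoted above: ~1,339 read/write IOPS on standard MCS shared storage
# versus ~20 read/write IOPS on the MCSIO temporary cache disk.
baseline, redirected = 1339, 20
print(f"Reduction: {(baseline - redirected) / baseline:.1%}")  # -> Reduction: 98.5%
```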

MCSIO temp mem cache usage win10

Although other system metrics have not been called out in this blog, there was nothing remarkable compared with the MCS baseline tests. Watching other host metrics for both the RDS and VDI tests did not show any significant change in CPU utilization, and the LoginVSI baseline values indicate a similar user experience across all tests, regardless of temporary cache size.

Summary

  • When using RDS workers, sizing the environment correctly is really important if you want to reduce the IO load on the temporary disk. They require more in-guest memory to obtain the same level of benefit seen in the VDI tests. The reason is that RDS workers have a different resource consumption profile to VDI workers: they use shared resources, with multiple sessions running on each machine, rather than a one-user-to-one-machine mapping.
  • You can use the temporary memory cache as a way of reducing your temporary cache disk requirements. The larger the temporary memory cache, the lower the temporary disk space and IO requirements.

This is the second in a series of blogs. Expect to see a third in this series, focusing on recommendations and findings from the study of this feature.
