"New" Citrix Best Practices 2.0

It's been a few years since I published my first "New Citrix Best Practices" article, so I wanted to post a follow-up for a couple of reasons.

The first is fairly obvious: things change quickly in this industry, and what we considered leading practices even a year or two ago may not be anymore. Looking back at that 2014 article, I also have to laugh a little at some of the things I wrote.

The second reason is that the "1.0" article was one of my favorite pieces I've ever written, and the content apparently resonated with some folks out there or proved valuable. It was also one of the most commented-on articles on the Citrix blogs, with 93 comments and counting. So I feel like it's a great time to update the list and continue to call some of our bad habits and old ways of thinking into question.

Now, let's bust some myths...

Common myths and "New" Best Practices (v2.0)

  • PVS ports and threads. I still see so many people (including our own CCS team, so we're guilty, too!) using non-optimal settings for PVS ports and threads. If you haven't already done so, read my colleague's article on the updated guidance for ports and threads as soon as possible; bookmark it and start applying those leading practices in every PVS deployment going forward. (I've also put a quick sizing sketch after this list.)
  • XenApp CPU over-subscription ratio and CoD. I still see so many XenApp administrators unwilling to over-commit the cores at all, let alone implement the "1.5x" ratio I've been preaching for the past 5 years. But, as I've been saying for the last year or so, it's now time to take a critical look at implementing "2.0x" for XenApp/RDS workloads. Why? Hardware is better, hypervisor schedulers are better, people are lazier than ever, and the list goes on and on. So I've actually been recommending (and implementing) a 2.0x CPU over-subscription ratio on a variety of projects lately. If you put it to the test with LoginVSI or real workloads, I bet you'll find more often than not that it's the optimal sweet spot in terms of SSS (single server scalability) and user density (the quick math is sketched after this list). And, somewhat related to this XenApp scalability discussion, don't be afraid to enable Cluster-on-Die (CoD) if you have newer boxes with Intel HCC Haswell-EP+ chips.

    Because these Windows-based XenApp/RDS workloads we deliver on hypervisors are highly NUMA-optimized or "aware," you can squeeze an extra 5-10% density out of your boxes simply by changing the default snoop mode in the BIOS. If these concepts of CPU over-subscription or CoD are foreign to you, I'd recommend reading the XenApp scalability articles I published in the last year.

  • Protocols and codecs. This is another simple thing to do, but I still don't see a lot of customers doing it. I presented all the technical details on this subject in London a few weeks ago (check out my BriForum London session for all the gory details). What it really boils down to is this: if you are delivering a "modern" MSFT operating system as part of your Citrix deployment (i.e. Win10, 2012 R2, etc.), then I recommend switching the default graphics codec from H.264 to the non-H.264 flavor of ThinWire (also known as ThinWire Plus). Most applications and use cases don't need or benefit from H.264, and leaving it enabled reduces your SSS, since H.264 is a CPU hog and 99.9% of XenApp/XenDesktop workloads these days are CPU-bound. On the flip side, if you are delivering a "legacy" MSFT OS (i.e. Win7, '08 R2, etc.), then I recommend sticking with the proven legacy ThinWire implementation. Legacy ThinWire is optimized for those older operating systems that rely on GDI/GDI+. (There's a tiny decision helper after this list, too.)

    If you don't know how to change the graphics codec, the built-in policy templates are your best friend. I should also note that, as of a week ago when we shipped 7.9, we changed the default codec from H.264 to the non-H.264 ThinWire. I personally think this is a big step.

  • Farm/site/zone design. I've written a lot about multiple farms and sites, and if you've read my stuff, you know I'm a big fan of multiple farms/sites, essentially using a pod architecture to increase elasticity and minimize failure domains. Because it's not a question of "if" you're going to go down, it's a question of "when." But this is one I have to address head-on, because there is some really bad guidance circulating out there in the blogosphere on this topic. Yes, the FMA architecture brings back pieces of the glorious Local Host Cache (LHC), but even with the 7.9 release, it's not there yet. We still rely on a central SQL database and a primary zone. And I've seen people out there writing that the LHC is back or an equivalent exists; let me be clear, it is not. We have Connection Leasing, which made its debut a few releases back, and we introduced multi-site or zone concepts in 7.7. But if you read the fine print in WilliamC's awesome article (or actually test it), these FMA-based "zones" are not like the old IMA-based zones.

    What I mean is, if you have even just a few thousand VDAs, then you need sub-10ms links! For this reason alone, we really aren't implementing these FMA-based zones in the field yet; instead, we tend to go with multiple sites (there's a small decision sketch after this list). And again, to be crystal clear, Connection Leasing 2.0 or an FMA-based equivalent of the LHC is not in the 7.9 release that shipped last week. You'll just have to wait and see what we have planned next.

  • vSphere cluster sizing. A few years ago, we used to be really strict on this one, saying you should probably cap each vSphere cluster running XenDesktop or XenApp workloads at 8 or 16 hosts. But I am now routinely recommending (and implementing) 8-16 hosts per cluster for XenDesktop workloads, and as many as 24-32 hosts per cluster for XenApp workloads, especially if you go with larger XenApp VM specs, as you should be! With all that said, it is still advisable to scale out and use multiple clusters (quick math after this list), but we shouldn't simply be capping clusters at 8 hosts these days because some consultant said so back in 2011.
  • PVS and MCS memory buffers. First of all, if you haven't "converted" your PVS infrastructure to the RAM Cache with Overflow to Disk (RCwOtD) write cache method and VHDX for the images themselves, then it's probably time to at least consider it. But the more important thing here, and this applies to both PVS and MCS (yes, TobiasK, we finally got there for you in 7.9!), is to change the default memory buffer that PVS uses with the newer write cache method and that MCS can now use in 7.9. I'm talking about the amount of non-paged pool memory that PVS or MCS can effectively use as an I/O cache before spilling out to disk. While the right amount will vary with the type of workload, I don't recommend going with the default, as it is quite low. I recommend going with 256-512 MB of RAM for XenDesktop workloads if you can... and 2-4 GB for XenApp workloads (sizing sketch after this list). I have found the sweet spot is where close to 0% of the I/O generated by the workload spills to disk versus being cached in memory. And by the way, if you're interested in learning more about how the new MCS I/O optimization feature works in 7.9, one of our CTPs, Andrew Morgan, wrote up a great article on it: MCS I/O optimization.
  • "optimizations". XenMobile we are not just about virtualization here at Citrix - we have mobility, networking and cloud products, as it turns out to be ,. 😉 And we have done a lot more XenMobile installations than in recent times. And my colleague Ryan McClure, recently published an excellent article about XenMobile optimizations, some of the "optimization" we document basically the configuration were every provision. And I put optimizations in quotes because these improvements are really not - they are more like a better standard values ​​we soon in the product to be installed. So if you are that mobility travel are embarking, be sure to read article in its entirety, because it could save your life.

Wrap-Up

As I stated in the first "new" best practices article, I really want to encourage everyone to challenge old ways of thinking and question some of these ancient best practices. Sometimes we get it wrong at Citrix. Sometimes our own CCS folks might follow or implement outdated best practices. And when we do, please let us know and we will get it fixed. Leave me a comment below and I'll make it my personal mission to get it corrected, and then we'll communicate it within our organization and the community to make the world a better place.

Have another "myth" or best practice that you think has changed over the years? If so, please send me a comment below. As I said a few years ago, I'm a big believer that this kind of transparency can our customers and partners to help design and implement better Citrix-enabled solutions in the future.

Cheers,
Nick

Nick Rintalan, Senior Architect - Citrix Consulting Services ("CCS")
