Zones, latency and brokering performance


With the addition of zones in XenApp/XenDesktop 7.7, I wanted to dig a little deeper into broker placement and the effects of latency on brokering performance.

Both Craig and William have already talked about zones, so I'm going to dive into just one area: latency and the brokering service.

For the majority of users, enumerating resources and launching them is something they'll do every day. With the addition of zones, users can be brokered by a Controller local to their satellite zone, even though that Controller may have higher latency to the central site database.

This additional latency will inevitably have an impact on the end-user experience. For the majority of the work users do, the only slowdown they see will be the expected one tied to round trips between the satellite zone and the broker's SQL database.

For app launches, the pain point is in actually brokering sessions. This is due to the need to find the least-loaded VDA on which to launch an app. This is done within a database transaction and requires a snapshot of the current load of every VDA in the delivery group. To achieve this, a lock is taken out on all workers in the delivery group, so other launches stall (i.e. are serialized) waiting to take the same locks. It also blocks worker status changes (e.g. session events).

At low latency, the delay between taking the locks and releasing them is very small. But as latency increases, so does the time the locks are held, thus increasing the time taken to broker sessions.
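A rough way to see why this serialization hurts: each brokering transaction makes several round trips to the site database while holding the delivery-group locks, so serialized throughput is bounded by one over the lock hold time. The sketch below is a simplified illustrative model only; the per-transaction round-trip count and fixed local cost are assumptions for illustration, not measured values:

```python
# Simplified model: brokering transactions are serialized on delivery-group
# locks, so throughput is bounded by how long each transaction holds them.
# ASSUMPTIONS (illustrative, not measured): each brokered session makes a
# handful of SQL round trips while the locks are held, plus a fixed local cost.

def max_broker_rate(rtt_ms, round_trips=4, local_cost_ms=40):
    """Upper bound on brokered sessions/second under full serialization."""
    hold_time_ms = local_cost_ms + round_trips * rtt_ms
    return 1000.0 / hold_time_ms

for rtt in (10, 45, 90, 160, 250):
    print(f"{rtt:3d}ms RTT -> at most {max_broker_rate(rtt):5.2f} sessions/s")
```

Even this toy model shows the shape of the problem: the lock hold time, and therefore the brokering rate, is dominated by the RTT term once latency is large.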

To quantify this, we measured launch rates at a variety of latencies. The latencies are round-trip times (RTT) and were based on Verizon IP latency statistics. Note that most real-world RTTs are lower than the maximum values listed, but we wanted to make sure we experimented with some useful RTTs.

Round-trip times of 10ms cover most in-country delays; 45ms covers North America, Europe and intra-Japan; 90ms covers trans-Atlantic routes; 160ms covers trans-Pacific, Latin America and Asia-Pacific; finally, 250ms covers EMEA to Asia-Pacific.

We also tested with a range of concurrent launch requests, using values from 12 to 60 in increments of 12.

Note: the VDA sessions were simulated so that the tests focused on the effects of latency on the broker. For these tests, there were 57 VDAs within a delivery group, and each test attempts to launch 10,000 users.

10ms RTT results

Concurrent requests              12      24      36      48      60
Average response time (s)        0.9     1.4     1.6     2.1     2.6
Brokering requests per second    14      17.8    22.9    23.2    22.9
Errors (%)                       0       0       0       0       0
Time to launch 10k users         11m57s  9m24s   7m16s   7m11s   7m17s
As expected, 10ms is easily fast enough to handle the stresses placed on the system. No errors were seen, and launches completed fastest. At the maximum launch rate of 60 concurrent requests, the average response time was 2.6s, and launching all 10k users took just over 7 minutes.
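The per-second rate and the total launch time in the table are consistent with each other: 10,000 launches divided by the brokering rate gives the wall-clock time. A quick arithmetic check:

```python
# Cross-check the 10ms table: total launch time should be roughly
# 10,000 users divided by the measured brokering rate.
def launch_time(users, rate_per_s):
    secs = users / rate_per_s
    return f"{int(secs // 60)}m{int(secs % 60):02d}s"

print(launch_time(10_000, 22.9))  # -> 7m16s, close to the measured 7m17s
print(launch_time(10_000, 14.0))  # -> 11m54s, close to the measured 11m57s
```

The same check works for every table below, which is why the flat brokering rate at high latencies translates directly into long total launch times.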

45ms RTT results

Concurrent requests              12      24      36      48      60
Average response time (s)        1.7     3.1     4.3     6.4     7.3
Brokering requests per second    7.1     7.8     8.4     7.5     8.2
Errors (%)                       0       0       0       0.01    0.01
Time to launch 10k users         23m28s  21m19s  19m51s  22m15s  20m19s

At 45ms, the results were still good; only at the very highest launch rates did 1 or 2 users see an error. Note that the effects of serialization can be seen in the brokering response times, which rise from 1.7s to 7.3s per session. Total time to broker 10k users was 20-23m.

90ms RTT results

Concurrent requests              12      24      36      48      60
Average response time (s)        2.9     6.4     9.5     12.9    16.2
Brokering requests per second    4.1     3.7     3.8     3.7     3.7
Errors (%)                       0       0       0       0.01    0.01
Time to launch 10k users         40m30s  44m29s  44m11s  44m55s  45m04s

Again, at this latency a handful of errors were seen. However, the effect of latency on the transactions is now clearly visible: the average time to broker a session rises from an acceptable 2.9s at 12 concurrent requests to a probably unacceptable 16.2s at 60 concurrent requests. In this case it is actually better to broker users at a lower rate. Launching all 10k users took 40-45 minutes.

160ms RTT results

Concurrent requests              12        24        36        48        60
Average response time (s)        5.7       11.4      17.3      23.2      28.0
Brokering requests per second    2.1       2.1       2.1       2.1       2.1
Errors (%)                       0         0         0.12      4.0       17.7
Time to launch 10k users         1h19m00s  1h19m27s  1h19m55s  1h20m26s  N/A

At this latency we start to see significant errors at the higher launch rates: 4% errors at 48 requests and 17.7% at 60, with response times approaching 30s. However, up to 36 requests the error rate stays at 0.12% with an average brokering time of 17s. Note: it is difficult to judge the launch time at 60 requests, since a 17.7% failure rate is hard to factor in.

At this latency, we would not recommend going above 24 concurrent requests. The size of the site can also be a factor: logging on 1k users would take ~8m, which scales up to 1h20m for 10k users. As such, we would not recommend a large site with this level of latency to the database.
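The sizing numbers above follow directly from the ~2.1 brokered requests per second measured at this latency (the rate is flat across concurrency because brokering serializes on the locks):

```python
# Extrapolating logon time from the ~2.1 brokering requests/second measured
# at this latency; total time is simply users divided by the serialized rate.
rate = 2.1  # requests/second, from the table above

for users in (1_000, 10_000):
    minutes = users / rate / 60
    print(f"{users:6d} users -> ~{minutes:.0f} minutes")
```

That works out to roughly 8 minutes for 1k users and roughly 79 minutes for 10k, matching the measured 1h19m-1h20m runs.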
250ms RTT results

Concurrent requests              12        24        36        48      60
Average response time (s)        9.3       15.4      26.7      -       -
Brokering requests per second    1.3       1.6       1.3       -       -
Errors (%)                       0         0         4.6       42.8    99.0
Time to launch 10k users         2h08m33s  1h46m52s  2h03m46s  N/A     N/A

With latency this high, a large number of timeouts occurred at the higher launch rates. At 48 requests, 42% of requests failed, and at 60 requests timeouts were so widespread that the site would be unusable, with 99% of requests failing. This made the other data unhelpful, as the average response time reflects only the few successful requests.

The only acceptable launch rates were 12 and 24 requests. It would be difficult to recommend provisioning a large site with this level of latency: logging on 1k users took 13m with 12 concurrent requests and 11m with 24 concurrent requests. 10k users would take around 2h8m.

Throttling requests

If you need to work with high latency and find that too many timeouts occur, a registry key has been added in XenApp/XenDesktop 7.7 that limits the broker to working on a fixed number of concurrent brokering requests. Any request over the limit is rejected, requiring StoreFront to retry it after a few seconds. This helps requests recover and reduces the lock queue. However, some users may see extended launch times if they are repeatedly unlucky and their requests keep being backed off.

The key is a DWORD and should be created at:

HKLM\Software\Citrix\DesktopServer\ThrottledRequestAddressMaxConcurrentTransactions

If the key is not present, no limit is placed on brokering requests. Note: the key is per DDC, so when sizing the limit you must account for the total requests arriving at the SQL server across all the remote DDCs.
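Assuming a Windows Controller, the value can be set with a few lines of Python using the standard-library winreg module; this is a sketch only, and the limit of 24 here is an arbitrary example, not a recommendation:

```python
# Sketch: set the brokering throttle on a DDC (run on the Controller itself,
# elevated). The value (24) is an example only; pick a limit suited to your
# measured latency, and remember it applies per DDC, so the SQL server sees
# the limit multiplied across all remote DDCs.
import winreg

KEY_PATH = r"Software\Citrix\DesktopServer"
VALUE_NAME = "ThrottledRequestAddressMaxConcurrentTransactions"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 24)
```

Deleting the value (or never creating it) restores the default unthrottled behaviour.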

Summary

Brokering does work over latency, but the latency must be factored in when sizing a remote zone. When a zone is large, it may still be desirable to keep a database local to that zone. If the zone is small, a remote zone can work well and could also reduce management costs without affecting the end-user experience.

Note that we recommend keeping your zones below 250ms RTT; beyond that, you should consider separate sites.

