Little's Law
If we consider customers arriving at a server at a rate R and spending a time T using the server, the average number of customers C in the system is C = R*T. This is known as Little's Law.
Rearranging, the throughput of a system can be measured by dividing the number of users by the time each spends in the box: R = C/T.
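As a quick numeric sanity check (a minimal Python sketch; the numbers are made up for illustration, not taken from any measurement):

    # Little's Law: C = R*T, so R = C/T
    users = 6              # C: average number of users in the server
    time_in_server = 3.0   # T: seconds each request spends in the server
    throughput = users / time_in_server
    print(throughput)      # 2.0 requests per second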
Now let's assume that users wait a time To between requests, known as think time: the interval a user typically spends interacting with the system before issuing the next request. Applying C = R*T to that waiting period, the number of users in think time at any moment is Co = R*To.
The total number of users is then CTotal = C + Co = R*T + R*To = R(T + To), which rearranges to R = CTotal/(T + To), where T is the time spent in the server (the response time), To is the average think time, CTotal is the total number of users, and R is the throughput.
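Encoded as a small helper (a sketch only; the function and parameter names are my own, not from any particular tool):

    def throughput(total_users, response_time, think_time=0.0):
        # Little's Law with think time: R = CTotal / (T + To)
        return total_users / (response_time + think_time)

With think_time=0 this reduces to the basic form R = C/T.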
If we had a system with 200 users issuing 1600 requests over 15 minutes with an average response time of 2 seconds, the throughput would be R = 1600/900 ≈ 1.78 requests per second, and the average think time would be To = CTotal/R - T = 200/1.78 - 2 ≈ 110.5 seconds.
Now, if we wanted to reproduce the same workload without think times in the test environment, we could safely equate the two forms of the law, R = C/T and R = CTotal/(T + To), giving C/T = CTotal/(T + To). In our hypothetical case, C/2 = 200/(2 + 110.5), so 112.5*C = 400 and C ≈ 3.6, i.e. about 4 users.
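The whole worked example fits in a few lines of plain Python (the inputs are the hypothetical case above; rounding the user count up is my assumption, since you cannot run a fractional user):

    import math

    requests = 1600            # requests observed in the window
    duration = 15 * 60         # measurement window: 15 minutes in seconds
    total_users = 200          # CTotal
    response_time = 2.0        # T, in seconds

    rate = requests / duration                       # R = 1600/900 ~ 1.78 req/s
    think_time = total_users / rate - response_time  # To = 112.5 - 2 = 110.5 s

    # Same throughput with zero think time: C = R*T
    users_no_think = rate * response_time            # ~ 3.6 users
    print(rate, think_time, math.ceil(users_no_think))  # ceil -> 4 users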
One can extrapolate any workload this way, provided there is a well-known production profile or a performance test goal in mind, and the law is applied correctly.