
In my testing, both min_wait and max_wait are set to 1 second (1000 ms), and I set the number of users to 100, so I expect req/s to be close to 100.

I know that Locust waits for the server's response before sending the next request. Even so, if the server responds quickly, say in 20 ms, the resulting TPS should still be close to 100, maybe 92.
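A back-of-the-envelope check of that expectation (my own arithmetic, not Locust output):

```python
# Each user completes one iteration every (wait + response) seconds,
# so total throughput is roughly users / (wait + response).
users = 100
wait = 1.0          # min_wait == max_wait == 1000 ms
response = 0.020    # assumed ~20 ms server response time

expected_rps = users / (wait + response)
print(round(expected_rps, 1))  # ~98.0
```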

But in actuality it is 10, as the following screenshot shows:

[screenshot]

What am I missing?

My code is below:

from locust import HttpLocust, TaskSet, task


class UserBehavior(TaskSet):

    @task(1)
    def list_teacher(self):
        self.client.get("/api/mgr/sq_mgr/?action=list_teacher&pagenum=1&pagesize=100")

    @task(1)
    def list_course(self):
        self.client.get("/api/mgr/sq_mgr/?action=list_course&pagenum=1&pagesize=20")


class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000  # fixed 1-second wait between tasks (milliseconds)
    max_wait = 1000
  • This is the exact same question as stackoverflow.com/q/53737188/10653038 Commented Dec 13, 2018 at 1:47
  • The answer in that post does not convince me. In my testing, the average response time statistics show that my server responds very quickly, yet the TPS is much lower than I expected. Even if it can't reach 100 TPS, it should be around 90; why is it 10? Commented Dec 14, 2018 at 6:13
  • What's your hatch rate? Commented Dec 14, 2018 at 7:39
  • Who knows @Jcyrss - it could be a lot of reasons. Most people assume their service is much faster and more scalable than it is (almost every time I see questions like this as a Locustio maintainer, the service ends up being the problem). Commented Dec 16, 2018 at 21:56
  • @Siyu, I tried many hatch rates, from 10 to 100 per second; the TPS never got close to 100, even after a fairly long time. Commented Mar 28, 2019 at 3:21

1 Answer


I replicated your scenario with a minimal service that answers after a 10 ms sleep, and was able to reach 98 req/s.

Name                                                          # reqs      # fails     Avg     Min     Max  |  Median   req/s
--------------------------------------------------------------------------------------------------------------------------------------------
 POST call1                                                      1754     0(0.00%)      20      13      34  |      20   47.90
 POST call2                                                      1826     0(0.00%)      20      13      30  |      20   50.10
--------------------------------------------------------------------------------------------------------------------------------------------
 Total                                                           3580     0(0.00%)                                      98.00
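For reference, a minimal sketch of the kind of dummy service used for the test above (an assumption on my part; the answer doesn't show its server code). This uses only the Python standard library and sleeps 10 ms per request:

```python
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        time.sleep(0.010)            # simulate 10 ms of server-side work
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):    # keep per-request logging quiet
        pass

def serve(port=8080):
    # ThreadingHTTPServer handles each request in its own thread, so
    # 100 concurrent Locust users are not serialized behind one handler.
    ThreadingHTTPServer(("", port), SlowHandler).serve_forever()
```

With a server like this answering in ~10 ms, the per-user iteration time is dominated by the 1-second wait, which is why the totals land near 98 req/s.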

So the parameters are fine.

Possible reasons for lower numbers:

  • The service itself is slow to answer
  • The number of parallel requests is limited. For example, a thread pool of size 5 on the critical path would cap your req/s.
  • Network latency is not accounted for. Locust starts the wait only after the task completes, so if the service answers in 10 ms but you have a 90 ms round-trip time, you get 100 ms end to end. I'd bet on this one, especially if you're load testing a remote server from your local machine.
  • Locust itself might be slow. It is Python, after all. For me it caps out at ~550 req/s (pretty low, I'd say) because the IO event loop saturates one core.
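The network-latency bullet above can be put into numbers (my own back-of-envelope, with an assumed 90 ms round trip):

```python
# A user's iteration time is wait + end-to-end response, so per-user
# throughput drops as round-trip time grows.
users = 100
wait = 1.0           # seconds between tasks
service = 0.010      # server-side processing time
rtt = 0.090          # assumed network round-trip time

rps = users / (wait + service + rtt)
print(round(rps, 1))  # ~90.9
```

By the same model, an observed 10 req/s with 100 users would imply roughly 10 seconds per iteration, i.e. about 9 seconds of end-to-end response time on top of the 1-second wait.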