jetty reactive client bottleneck for spring reactive flatMap operator #228
Comments
I don't think so. Jetty's HttpClient is asynchronous; I would start by using it directly rather than using jetty-reactive-httpclient. Once you achieve the desired number of requests/s you can reintroduce reactive and/or WebClient.
I doubt it has anything to do with the scheduling of jetty-reactive-httpclient.
Thanks for your comments. Just adding some more facts here: I tried this (note the concurrency number):
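A minimal sketch of such a call, assuming Spring WebFlux and an already-built webClient; the URL and source range are placeholders, not the original code:

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

// Sketch only: "webClient", the URL and the range are placeholders.
static void sendWithConcurrency(WebClient webClient) {
    Flux.range(1, 1_000_000)
        .flatMap(i -> webClient.get()
                .uri("https://third-party.example.com/process/" + i)
                .retrieve()
                .bodyToMono(String.class),
            4) // the flatMap concurrency under discussion
        .subscribe();
}
```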
And could see in the logs thread names like [HttpClient@abc-N].
i.e., with flatMap concurrency set to 4, there seem to be 4 threads from Jetty HttpClient (please correct me if I am wrong). Now, if I increase it to 8, I see 8. Therefore, I think there is some kind of correlation between the flatMap concurrency number and this [HttpClient@abc-N] count.

However, as I scale up to 16, 32, 64, 128 [...], at some point I am not able to see the same number of [HttpClient@abc-N] threads. As a concrete example, if I set flatMap to 4096, I would have expected some 4096 distinct threads, [HttpClient@abc-1] through [HttpClient@abc-4096]; however, not even a hundred could be observed. May I ask if this is expected?
I'm not sure what that parameter does exactly, so I cannot comment. If you're trying to perform some kind of load testing, I feel you are going about it the wrong way.
No problem at all, and again, thank you for all the responses. I am not doing some kind of load testing; this is a real production-level business use case: "hit the third-party server as hard as possible". I believe I have found further clues. I am now doing trial and error (mostly errors, the parameters are a bit overwhelming). Adding this (note the custom QueuedThreadPool):
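A minimal sketch of what wiring such a pool into Jetty's HttpClient might look like, assuming the 1000 max threads and the client-thread name described next; the factory method is illustrative:

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

// Sketch only, not the original configuration.
static HttpClient newClient() throws Exception {
    QueuedThreadPool clientThreads = new QueuedThreadPool(1000); // maxThreads
    clientThreads.setName("client-thread"); // shows up as client-thread-N in logs
    HttpClient httpClient = new HttpClient();
    httpClient.setExecutor(clientThreads);
    httpClient.start();
    return httpClient;
}
```

Note that QueuedThreadPool creates threads lazily, up to maxThreads, only when there is enough concurrent work queued, which may explain seeing far fewer than 1000 distinct thread names.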
Issue 1: with this QueuedThreadPool set to 1000 max threads, I would have expected to see [client-thread-1] to [client-thread-1000]. However, I am only seeing a small fraction of that.

Is the max-threads property not being picked up? Also, may I ask which thread pool (QueuedThreadPool, ExecutorThreadPool, another thread pool?) would be the best fit for this production use case (not load testing), which is to spawn as many threads as possible in order to send as many requests as possible?
I think you have the wrong expectations.
You are basically trying to max out 2 systems, so it is load testing. I think you need to carefully monitor CPU, network, the JVM and the application to understand what's going on. Please read: https://github.com/jetty-project/jetty-load-generator/blob/2.1.x/README.md
Understood @sbordet. This is very unfortunate: with this reactive paradigm we use very, very little resource.
We are only using a very low amount of CPU and memory, using this Jetty client to send some 16 requests per second, with the server confirming it receives 16ish requests per second (while it should be able to handle 8000/s), while the incoming flux of data is piling up and Jetty is not able to send it. May I ask if there is some documentation on the executor, thread pool and selectors please?
Look, it's not Jetty. The documentation is here. Make sure you have a large maxConnectionsPerDestination. Then make sure you actually send the requests without waiting for the responses. As I said multiple times now, start with Jetty HttpClient directly.
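A minimal sketch of that last suggestion, assuming Jetty 9.4/10 package names; the URL, counts and limits are placeholders, not values from this thread:

```java
import org.eclipse.jetty.client.HttpClient;

public class DirectJettyClient {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();
        httpClient.setMaxConnectionsPerDestination(4096);     // placeholder
        httpClient.setMaxRequestsQueuedPerDestination(65536); // placeholder
        httpClient.start();

        for (int i = 0; i < 10_000; i++) {
            // Asynchronous send: the listener fires on completion,
            // so the loop never blocks waiting for a response.
            httpClient.newRequest("https://third-party.example.com/process")
                .send(result -> System.out.println(result.getResponse().getStatus()));
        }

        Thread.sleep(15_000); // demo only: crude wait for in-flight responses
        httpClient.stop();
    }
}
```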
What I am trying to achieve:
Send as many HTTP requests as possible, in parallel, to a very reliable third-party service from an aggressive Flux.
Background:
The third-party service is very reliable and can sustain a very high number of requests. So far, I am the only client of this third-party service, and I am invited to hit the third-party server as hard as possible. The service, which I have no control over, does not offer any bulk/list API; I can only send requests one by one. On their side, each request takes a constant one second to process.
This third-party API has keep-alive enabled, but it does not support gRPC, HTTP/2, or WebSocket. There is no rate limit at any point of the end-to-end flow.
What did I try:
Here is the configuration of my HTTP client, as well as the logic to send the HTTP requests (hopefully as many requests, as fast as possible).
The client:
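A minimal sketch of such a configuration, assuming Spring's JettyClientHttpConnector over a Jetty HttpClient; the connection limit shown is a placeholder:

```java
import org.eclipse.jetty.client.HttpClient;
import org.springframework.http.client.reactive.JettyClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;

// Sketch only, not the original configuration.
static WebClient newWebClient() throws Exception {
    HttpClient httpClient = new HttpClient();
    httpClient.setMaxConnectionsPerDestination(4096); // placeholder
    httpClient.start();
    return WebClient.builder()
        .clientConnector(new JettyClientHttpConnector(httpClient))
        .build();
}
```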
The flux:
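And a sketch of the flux, assuming the webClient from the previous sketch, a placeholder source named incoming, and the 4096 concurrency discussed below:

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

// Sketch: "incoming" stands in for the real source of work items.
static void run(WebClient webClient) {
    Flux<String> incoming = Flux.range(1, Integer.MAX_VALUE).map(String::valueOf);
    incoming
        .flatMap(item -> webClient.post()
                .uri("https://third-party.example.com/process")
                .bodyValue(item)
                .retrieve()
                .bodyToMono(String.class),
            4096) // the flatMap concurrency
        .subscribe();
}
```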
Using this, I asked the third-party service and they gave me a number N, my number of requests per second.
First observation: the flatMap concurrency here is 4096, and since the third party takes one second to process each request, I would have expected a rate N of roughly 4096 requests per second (with C requests in flight and a one-second service time per request, throughput should approach C requests per second).
However, I am nowhere close. The third-party service told me I am at 16ish requests per second.
Issue:
I believe the underlying limitation comes from the Jetty HTTP client.
I swapped the WebClient (which uses Jetty) for a dummy operation, and could see much higher throughput.
I believe the issue here is that the scheduling policy of the jetty-reactive-httpclient library is limiting the throughput.
What parameters (the number of concurrent connections, possibly I/O threads, keep-alive) should I use in order to "unleash" the Jetty reactive HTTP client?
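For reference, a hedged sketch of the Jetty HttpClient knobs that typically gate this kind of cap; all values are placeholders to be validated by measurement:

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

static HttpClient newTunedClient() throws Exception {
    HttpClient httpClient = new HttpClient();
    // With a constant 1-second server-side latency, throughput is capped at
    // roughly one request per connection per second, so the per-destination
    // connection limit directly bounds requests/s.
    httpClient.setMaxConnectionsPerDestination(4096);     // placeholder
    httpClient.setMaxRequestsQueuedPerDestination(65536); // placeholder
    QueuedThreadPool pool = new QueuedThreadPool(200);    // placeholder
    pool.setName("client-thread");
    httpClient.setExecutor(pool);
    httpClient.start();
    return httpClient;
}
```

Observing 16ish requests/s at a constant one-second latency is consistent with only about 16 connections actually being used, which is why the per-destination connection limit is the first thing to check.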