Cannot handle more than 1,000 clients / s

Marco Colli
Cannot handle more than 1,000 clients / s
May 11, 2018 02:10PM
Hello!

Hope that this is the right place to ask.

We have a website that uses HAProxy as a load balancer and nginx in the
backend. The website is hosted on DigitalOcean (AMS2).

The problem is that - no matter the configuration or the server size - we
cannot achieve a connection rate higher than 1,000 new connections / s.
Indeed we are testing using loader.io and these are the results:
- for a session rate of 1,000 clients per second we get exactly 1,000
responses per second
- for session rates higher than that, we get long response times (e.g. 3s)
and only some hundreds of responses per second (so there is a bottleneck)
https://ldr.io/2I5hry9

Note that if we use a long HTTP keep-alive in HAProxy and the same browser
makes multiple requests, we get much better results; however, in reality we
need to handle many different clients (which make 1 or 2 requests each on
average), not many requests from the same client.

Currently we have this configuration:
- 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the result is
the same)
- system / process limits and HAProxy configuration:
https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
- 10x nginx backend servers with 2 vCPU each

What can we improve in order to handle more than 1,000 different new
clients per second?

Any suggestion would be extremely helpful.

Have a nice day
Marco Colli
Marco Colli
Re: Cannot handle more than 1,000 clients / s
May 11, 2018 03:30PM
Another note: each nginx server in the backend can handle 8,000 new
clients/s: http://bit.ly/2Kh86j9 (tested with keep alive disabled and with
the same http request)

Mihai Vintila
Re: Cannot handle more than 1,000 clients / s
May 11, 2018 03:40PM
Check how many connections you have open on the private side (i.e.
between HAProxy and nginx); I'm thinking that they are not closing fast
enough and you are reaching a limit.
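
(One quick way to count those from the HAProxy host might be something like
this, assuming the nginx backends listen on port 80:)

    # established and lingering TIME-WAIT connections towards the backends
    ss -tan state established '( dport = :80 )' | wc -l
    ss -tan state time-wait  '( dport = :80 )' | wc -l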

Best regards,
Mihai

Marco Colli
Re: Cannot handle more than 1,000 clients / s
May 11, 2018 04:40PM
> how many connections you have open on the private side

Thanks for the reply! What should I do exactly? Can you see it from HAProxy
stats? I have taken two screenshots (see attachments) during the load test
(30s, 2,000 clients/s).

> they are not closing fast enough and you are reaching a limit

What can I do to improve that?

Attachments:
- Screen Shot 2018-05-11 at 16.21.00.png (269.4 KB)
- Screen Shot 2018-05-11 at 16.20.53.png (268 KB)
Baptiste
Re: Cannot handle more than 1,000 clients / s
May 11, 2018 05:10PM
Hi Marco,

I see you enabled compression in your HAProxy configuration. Maybe you want
to disable it and re-run a test just to see (though I don't expect any
improvement since you seem to have some free CPU cycles on the machine).
Maybe you can run a "top" showing each CPU usage, so we can see how much
time is spent in SI and in userland.
I saw you're doing http-server-close. Is there any good reason for that?
The maxconn on your frontend seems too low, too, compared to your target
traffic (even though the 5000 will apply to each process).
Last, I would create 4 bind lines, one per process, like this in your
frontend:
bind :80 process 1
bind :80 process 2
...

Maybe one of your processes is being saturated and you don't see it. The
configuration above will ensure an even distribution of the incoming
connections across the HAProxy processes.
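
(A minimal sketch of that layout, assuming nbproc 4 is already set in the
global section; the cpu-map lines are an optional extra for pinning each
process to its own core:)

    global
        nbproc 4
        cpu-map 1 0
        cpu-map 2 1
        cpu-map 3 2
        cpu-map 4 3

    frontend www-frontend
        bind :80 process 1
        bind :80 process 2
        bind :80 process 3
        bind :80 process 4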

Baptiste


Marco Colli
Re: Cannot handle more than 1,000 clients / s
May 11, 2018 05:40PM
> Solution is to have more than one ip on the backend and a round robin when
> sending to the backends.

What do you mean exactly? I already use round robin (as you can see in the
config file linked previously) and in the backend I have 10 different
servers with 10 different IPs.

> sysctl net.ipv4.ip_local_port_range

Currently I have ~30,000 ports available... they should be enough for 2,000
clients/s. Note that the number of clients during the test is kept constant
at 2,000 (the number of connected clients is not cumulative / does not
increase during the test).
In any case I have also tested increasing the number of ports to 64k and
running a load test, but nothing changes.
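
(For reference, a rough sketch of the two knobs being discussed here:
widening the ephemeral port range, and spreading backend connections over
several source addresses as Mihai suggests. The IP addresses are made up
for illustration.)

    # on the HAProxy host: inspect / widen the ephemeral port range
    sysctl net.ipv4.ip_local_port_range
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"

    # in haproxy.cfg: a dedicated source address per server multiplies the
    # number of usable source ports on the haproxy -> nginx side
    backend www-backend
        server web1 10.0.1.1:80 check source 10.0.0.101
        server web2 10.0.1.2:80 check source 10.0.0.102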

> You are probably keeping them open for around 60 seconds and thus the limit

No, on the backend side I use http-server-close. On the client side the
number of clients is constant at 2k during the test, and in any case I have
the http keep-alive timeout set to 500ms.


On Fri, May 11, 2018 at 4:51 PM, Mihai Vintila <[email protected]> wrote:

> You can only have so many open ports. Once a new connection comes to
> haproxy, it'll initiate a new connection to the nginx backend. Each
> new connection opens a local port, and ports are limited by sysctl
> net.ipv4.ip_local_port_range. So even if you set it to 1024 65535 you
> still have only ~64000 sessions. Solution is to have more than one ip on
> the backend and a round robin when sending to the backends. This way you'll
> have 64000 sessions for each backend ip on the haproxy. Alternatively, make
> sure that you are not keeping the connections open for too long. You are
> probably keeping them open for around 60 seconds and thus the limit. As you
> can see you have 61565 sessions in the screenshots provided. Another limit
> could be the file descriptors, but it seems that this is set to 200k.
>
> Best regards,
> Mihai Vintila
>
Jarno Huuskonen
Re: Cannot handle more than 1,000 clients / s
May 11, 2018 05:50PM
Hi,

Is your load tester using https connections or http (probably https,
since you have redirect scheme https if !{ ssl_fc })? If https, and each
connection negotiates a new tls session, then there's a chance you are
testing how fast your VM can do tls negotiation.

Running top / htop should show if userspace uses all cpu.

Do you get better results if you use http instead of https?
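
(One rough way to sanity-check raw TLS handshake capacity, independently of
HAProxy, might be something like this; the hostname is a placeholder:)

    # -new forces a full handshake for every connection, -time sets the duration
    openssl s_time -connect www.example.com:443 -new -time 10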

-Jarno

--
Jarno Huuskonen
Marco Colli
Re: Cannot handle more than 1,000 clients / s
May 11, 2018 05:50PM
> Maybe you want to disable it

Thanks for the reply! I have already tried that and it doesn't help.

> Maybe you can run a "top" showing each CPU usage, so we can see how much
> time is spent in SI and in userland

During the test the CPU usage is pretty constant and the values are these:


%Cpu0  : 65.1 us,  5.0 sy,  0.0 ni, 29.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 49.0 us,  6.3 sy,  0.0 ni, 30.3 id,  0.0 wa,  0.0 hi, 14.3 si,  0.0 st
%Cpu2  : 67.7 us,  4.0 sy,  0.0 ni, 24.8 id,  0.0 wa,  0.0 hi,  3.6 si,  0.0 st
%Cpu3  : 72.1 us,  6.0 sy,  0.0 ni, 21.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st


> I saw you're doing http-server-close. Is there any good reason for that?

I need to handle different requests from different clients (I am not
interested in keep alive, since clients usually make just 1 or 2 requests).
So I think that http-server-close doesn't matter, because it only applies
to multiple requests *from the same client*.

> The maxconn on your frontend seems too low, too, compared to your target
> traffic (even though the 5000 will apply to each process).

It is 5,000 * 4 = 20,000, which should be enough for a test with 2,000
clients. In any case I have also tried increasing it to 25,000 per process
and the performance is the same in the load tests.

> Last, I would create 4 bind lines, one per process, like this in your
> frontend:
> bind :80 process 1
> bind :80 process 2

Do you mean bind-process? The HAProxy docs say that when bind-process is
not present it is the same as bind-process all, so I think that it is
useless to write it explicitly.


Marco Colli
Re: Cannot handle more than 1,000 clients / s
May 11, 2018 06:00PM
> Do you get better results if you use http instead of https?

I already tested it yesterday and the results are pretty much the same
(only a very small improvement, which is expected, but not a substantial
change).

> Running top / htop should show if userspace uses all cpu.


During the test the CPU usage is this:


%Cpu0  : 65.1 us,  5.0 sy,  0.0 ni, 29.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 49.0 us,  6.3 sy,  0.0 ni, 30.3 id,  0.0 wa,  0.0 hi, 14.3 si,  0.0 st
%Cpu2  : 67.7 us,  4.0 sy,  0.0 ni, 24.8 id,  0.0 wa,  0.0 hi,  3.6 si,  0.0 st
%Cpu3  : 72.1 us,  6.0 sy,  0.0 ni, 21.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st


Also note that when I increase the number of CPUs and HAProxy processes I
don't get any benefit on performance (and the CPU usage is much lower).


Jarno Huuskonen
Re: Cannot handle more than 1,000 clients / s
May 12, 2018 04:00PM
Hi,

A couple of things to check:
- first: can you test serving the response straight from haproxy,
  something like the following (see the quick check after this list):
    frontend www-frontend
        ...
        http-request deny deny_status 200

- second: from the stats screen captures you sent it looks like
  "backend www-backend" is limited to 500 sessions; try increasing the
  backend fullconn
  (https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-fullconn)
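
(With that deny rule in place, one quick way to confirm haproxy is answering
by itself might be a curl like this; the hostname is a placeholder:)

    # expect "200" back from haproxy without any backend involvement
    curl -sk -o /dev/null -w '%{http_code}\n' https://www.example.com/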

Are you running haproxy 1.6.3 ? It's pretty old (December 2015).

-Jarno

--
Jarno Huuskonen
Daniel
Re: Cannot handle more than 1,000 clients / s
May 12, 2018 04:10PM
Hi,

Maybe you need to increase the ulimit and max connections in the haproxy
config.
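
(A rough sketch of where those limits live; the values are only illustrative:)

    global
        # per-process connection limit; haproxy derives the required file
        # descriptor limit (ulimit-n) from this automatically
        maxconn 100000

    frontend www-frontend
        # per-frontend limit, which otherwise defaults to 2000
        maxconn 100000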

Marco Colli
Re: Cannot handle more than 1,000 clients / s
May 13, 2018 05:20PM
Thanks, I didn't see that low value... however that's not the problem,
because that value is ignored in my case, since I don't use minconn:
https://discourse.haproxy.org/t/backend-sessions-limit-200/1661
Basically fullconn is useful only if you set minconn (not my case);
otherwise it is ignored.
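
(For context, a minimal sketch of the combination in which fullconn does take
effect; the numbers are only illustrative:)

    backend www-backend
        # with minconn set, each server's effective maxconn scales between
        # minconn and maxconn depending on how close the whole backend is to
        # fullconn; without minconn on the servers, fullconn has no effect
        fullconn 10000
        server web1 10.0.1.1:80 check minconn 100 maxconn 1000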
