Haproxy health check interval value is not being respected

Posted by Adwait Gokhale
June 13, 2018 06:40PM
Hello,

I have come across an issue with the use of the 'inter' parameter, which
sets the interval between two consecutive health checks. When configured,
I found that health checks run twice as often as configured.

For instance, when I run haproxy with 2 backend servers and 'default-server
inter 10s port 80 rise 5 fall 3', I see that health checks to each
server arrive at an interval of 5 seconds instead of 10. With more than 2
servers, this behavior does not change.

Is this a known bug, or a misconfiguration of some sort? I appreciate
your help with this.

Thanks,
Adwait
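For reference, the expected timing with these parameters can be sketched as simple arithmetic: one check every 'inter' seconds, a server marked DOWN after 'fall' consecutive failures and UP again after 'rise' consecutive successes. This is a rough sketch of the documented semantics only; it ignores 'fastinter'/'downinter' (which change the interval during state transitions) and the interaction with 'timeout check' discussed later in the thread.

```python
def check_timing(inter_s, rise, fall):
    """Expected worst-case times (in seconds) to mark a server DOWN
    (fall consecutive failures) or back UP (rise consecutive
    successes), given one health check every inter_s seconds."""
    return {
        "interval": inter_s,
        "time_to_down": fall * inter_s,
        "time_to_up": rise * inter_s,
    }

# 'default-server inter 10s port 80 rise 5 fall 3' from the thread:
print(check_timing(10, rise=5, fall=3))
# {'interval': 10, 'time_to_down': 30, 'time_to_up': 50}
```

With checks instead arriving every 5 seconds, detection would be twice as fast, but the backend also sees twice the check traffic.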

Hi,

Maybe you could share your entire configuration?
That would help a lot.

Baptiste
Hi Baptiste,

Here is the haproxy configuration I have. Please let me know if you need
anything else.

global
    log 127.0.0.1 local0
    nbthread 2
    cpu-map auto:1/1-2 0-1
    maxconn 5000
    tune.bufsize 18432
    tune.maxrewrite 9216
    user haproxy
    group haproxy
    daemon
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m # Wait up to 2 minutes for input
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers <CIPHER>
    ssl-default-bind-options no-sslv3 no-tls-tickets
    ssl-default-server-ciphers <CIPHER>
    ssl-default-server-options no-sslv3 no-tls-tickets

defaults
    log global
    option splice-auto
    option abortonclose
    timeout connect 5s
    timeout queue 5s
    timeout client 60s
    timeout server 60s
    timeout tunnel 1h
    timeout http-request 120s
    timeout check 5s
    option httpchk GET /graph
    default-server inter 10s port 80 rise 5 fall 3
    cookie DO-LB insert indirect nocache maxlife 300s maxidle 300s

frontend monitor
    bind *:50054
    mode http
    option forwardfor
    monitor-uri /haproxy_test

frontend tcp_80
    bind 10.10.0.16:80
    default_backend tcp_80_backend
    mode tcp

backend tcp_80_backend
    balance leastconn
    mode tcp
    server node-359413 10.36.32.32:80 check cookie node-359413
    server node-359414 10.36.32.35:80 check cookie node-359414

Hi Adwait,

So, you have 'timeout check' set to 5s as well.
Are your servers up and running?
If not, the check timeout would trigger before the interval elapses, and
HAProxy would retry the health check (up to the 'fall' parameter).
(timeout connect might also trigger a retry if a SYN/ACK is not received
by HAProxy.)

If your servers are fully operational, can you try setting 'timeout check'
to 1s and see what happens?
Also, the output of 'haproxy -vv' would be interesting.

Baptiste
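One way to confirm the interval HAProxy is actually using is to timestamp the 'GET /graph' check requests as seen by one backend server (e.g. from its access log) and compute the gaps between them. A minimal sketch; the timestamps below are illustrative, not taken from this thread:

```python
from datetime import datetime

def observed_intervals(timestamps):
    """Given ISO-format timestamps of successive health-check requests
    seen by one backend server, return the gaps between them in seconds."""
    ts = [datetime.fromisoformat(t) for t in timestamps]
    return [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]

# Hypothetical access-log timestamps for GET /graph on one server:
seen = [
    "2018-06-13 18:00:00",
    "2018-06-13 18:00:05",
    "2018-06-13 18:00:10",
    "2018-06-13 18:00:15",
]
print(observed_intervals(seen))
# [5.0, 5.0, 5.0] -> checks every 5s, not the configured 10s
```

If the gaps cluster at half the configured 'inter', that points at duplicate checkers (or retries) rather than a mis-parsed interval.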



