Version 1.5.12, possible issue with backup server

Posted by Shawn Heisey
April 15, 2018 07:20PM
Kernel info:

[email protected]:/etc/haproxy# uname -a
Linux lb1 3.13.0-52-generic #86-Ubuntu SMP Mon May 4 04:32:59 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux


Here's the haproxy version output:

[email protected]:/etc/haproxy# haproxy -vv
HA-Proxy version 1.5.12 2015/05/02
Copyright 2000-2015 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux2628
  CPU     = native
  CC      = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE= USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
Running on OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (libpcre build without JIT?)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.



The configuration I had uses a backend with two servers, one of
them tagged as backup. This is the actual config:

backend be-cdn-9000
        description Back end for the thumbs CDN
        cookie MSDSRVHA insert indirect nocache
        server planet 10.100.2.123:9000 weight 100 cookie planet track chk-cdn-9000/planet
        server hollywood 10.100.2.124:9000 weight 100 backup cookie hollywood track chk-cdn-9000/hollywood


My check interval is 9990 milliseconds and the check timeout is 10 seconds.
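The server lines above track a separate check-only backend (chk-cdn-9000) that isn't shown in the config snippet. For reference, a minimal sketch of what that backend might look like, with the 9990 ms interval and 10 second check timeout expressed in config form (the exact contents of chk-cdn-9000 are an assumption on my part, only the name and tracked server names come from the config above):

```
backend chk-cdn-9000
        # Check-only backend; be-cdn-9000 tracks these servers' health
        # instead of running its own checks.
        timeout check 10s
        server planet 10.100.2.123:9000 check inter 9990
        server hollywood 10.100.2.124:9000 check inter 9990
```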

I did not do any precise timing, but it seems to take about ten seconds
between the primary server going down and the backup server changing to
active, and during that time, clients get "no server available" errors.

It is my expectation that as soon as the primary server goes down,
haproxy will promote the backup server(s) with the highest weight to
active and send requests there.

I know that I'm running an out-of-date version and can't expect any
changes in 1.5.x.

As a workaround, I have removed the "backup" keyword.  For this backend
that's an acceptable solution, but I use the same configuration
elsewhere, in cases where I want traffic to go to only one server as
long as that server is up.
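Concretely, the workaround amounts to dropping that one keyword from the second server line, so both servers are active at the same time:

```
        server planet 10.100.2.123:9000 weight 100 cookie planet track chk-cdn-9000/planet
        server hollywood 10.100.2.124:9000 weight 100 cookie hollywood track chk-cdn-9000/hollywood
```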

Thanks,
Shawn