Reverse Proxy with 500k connections

larsg
Reverse Proxy with 500k connections
March 07, 2017 09:00PM
Hi,

we are operating native nginx 1.8.1 on RHEL as a reverse proxy.
nginx routes requests to a backend server that can be reached from the
proxy via a single internal IP address.
We have to support a large number of concurrent websocket connections - say
100k to 500k.

As we don't want to increase the number of proxy instances (with different
IPs) and we cannot use the "proxy_bind transparent" option (it was introduced
in a later nginx release, and an upgrade is not possible), we wanted to
configure nginx to use different source IPs when routing to the backend.
That is, we want nginx to select an available source IP + source port when a
connection is established with the backend.

For that we assigned ten internal IPs to the proxy server and used the
proxy_bind directive bound to 0.0.0.0.
But this approach does not seem to work: the nginx instance always seems to
use the first IP as the source IP.
Using multiple proxy_bind directives is not possible.

So my question is: how can I configure nginx to select from a pool of source
IPs? Or, more generally: how can we overcome the 64k ephemeral port problem?
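(The arithmetic behind the "64k problem": the limit applies per source IP
for a given backend ip:port, so each extra source IP buys another ephemeral
port range. A back-of-the-envelope sketch, assuming the default Linux
ephemeral range of 32768-60999; the actual range is a sysctl setting,
net.ipv4.ip_local_port_range:)

```python
# Connections to ONE backend ip:port are limited by distinct
# (source_ip, source_port) pairs. Assumes the default Linux ephemeral
# port range (net.ipv4.ip_local_port_range = 32768 60999).
ephemeral_ports = 60999 - 32768 + 1  # 28232 usable ports per source IP

def max_connections(source_ips: int) -> int:
    """Upper bound on concurrent connections to a single backend ip:port."""
    return source_ips * ephemeral_ports

print(max_connections(1))   # 28232  - one source IP, far below 500k
print(max_connections(10))  # 282320 - ten source IPs
print(max_connections(18))  # 508176 - ~18 IPs needed for the 500k target
```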

Best Regards
Lars

------- extract from config

upstream backend {
    server 192.168.1.21:443;
}

server {
    listen 443 ssl;
    proxy_bind 0.0.0.0;

    location /service {
        proxy_pass https://backend;
        ...
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272808#msg-272808

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Nelson Marcos
Re: Reverse Proxy with 500k connections
March 07, 2017 10:20PM
Do you really need to use different source IPs, or is that just the solution
you picked?

Also, is it an option to set the keepalive directive in your upstream
configuration section?
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
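For reference, a minimal sketch of that suggestion (the keepalive value of
64 is an arbitrary example, and the proxy_http_version / Connection settings
are the usual prerequisites for upstream keepalive; for a websocket location
itself, the Connection header would have to carry the upgrade instead):

```nginx
upstream backend {
    server 192.168.1.21:443;
    # Cache up to 64 idle connections to the backend per worker and
    # reuse them instead of opening a new one per request.
    keepalive 64;
}

server {
    listen 443 ssl;

    location /service {
        proxy_pass https://backend;
        # Required for keepalive connections to upstream servers:
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```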

Regards,
NM

2017-03-07 16:50 GMT-03:00 larsg <[email protected]>:

> [...]
Rainer Duffner
Re: Reverse Proxy with 500k connections
March 07, 2017 10:30PM
> On 07.03.2017 at 22:12, Nelson Marcos <[email protected]> wrote:
>
> [...]


I’m not sure you can proxy websocket connections the same way as plain HTTP
connections.

After all, they are persistent (hence the large number of connections).

Why can’t you (OP) do the upgrade to 1.10? I thought it’s the only
"supported" version anyway?
Tolga Ceylan
Re: Reverse Proxy with 500k connections
March 07, 2017 11:20PM
How about using

split_clients "${remote_addr}AAA" $proxy_ip {
    10% 192.168.1.10;
    10% 192.168.1.11;
    ...
    *   192.168.1.19;
}

proxy_bind $proxy_ip;

where $proxy_ip is populated via the split_clients module to spread the
traffic across 10 internal IPs.

Or add 10 new listener ports (or IPs) to your backend server instead
(and perhaps use least-connected load balancing) with an upstream {} set
of 10 backends, e.g.:

upstream backend {
    least_conn;
    server 192.168.1.21:443;
    server 192.168.1.21:444;
    server 192.168.1.21:445;
    server 192.168.1.21:446;
    server 192.168.1.21:447;
    server 192.168.1.21:448;
    server 192.168.1.21:449;
    server 192.168.1.21:450;
    server 192.168.1.21:451;
    server 192.168.1.21:452;
}




On Tue, Mar 7, 2017 at 1:21 PM, Rainer Duffner <[email protected]> wrote:
> [...]
Andrei Belov
Re: Reverse Proxy with 500k connections
March 08, 2017 12:50AM
Yes, the split_clients solution fits the described use case perfectly.

Also, nginx >= 1.11.4 has support for IP_BIND_ADDRESS_NO_PORT socket
option ([1], [2]) on supported systems (Linux kernel >= 4.2, glibc >= 2.23) which
may be helpful as well.

Quote from [1]:

[..]
Add IP_BIND_ADDRESS_NO_PORT to overcome bind(0) limitations: When an
application needs to force a source IP on an active TCP socket it has to use
bind(IP, port=x). As most applications do not want to deal with already used
ports, x is often set to 0, meaning the kernel is in charge to find an
available port. But kernel does not know yet if this socket is going to be a
listener or be connected. This patch adds a new SOL_IP socket option, asking
kernel to ignore the 0 port provided by application in bind(IP, port=0) and
only remember the given IP address. The port will be automatically chosen at
connect() time, in a way that allows sharing a source port as long as the
4-tuples are unique.
[..]


[1] https://kernelnewbies.org/Linux_4.2#head-8ccffc90738ffcb0c20caa96bae6799694b8ba3a
[2] https://git.kernel.org/torvalds/c/90c337da1524863838658078ec34241f45d8394d
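The deferred port assignment described in the quote is easy to observe from
userspace (a sketch, assuming a Linux kernel >= 4.2; the fallback value 24
is the Linux constant for Python builds that do not expose it):

```python
import socket

# IP_BIND_ADDRESS_NO_PORT is 24 on Linux; fall back to the raw value
# for Python versions that do not expose the constant.
IP_BIND_ADDRESS_NO_PORT = getattr(socket, "IP_BIND_ADDRESS_NO_PORT", 24)

# A throwaway local listener to connect to.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

s = socket.socket()
s.setsockopt(socket.IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, 1)
s.bind(("127.0.0.1", 0))                # pin the source IP only
port_after_bind = s.getsockname()[1]    # 0: no port assigned yet

s.connect(listener.getsockname())       # port is chosen at connect() time
port_after_connect = s.getsockname()[1] # a real ephemeral port now

s.close()
listener.close()
```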


> On 08 Mar 2017, at 01:10, Tolga Ceylan <[email protected]> wrote:
> [...]

Tolga Ceylan
Re: Reverse Proxy with 500k connections
March 08, 2017 02:00AM
Of course, with split_clients you are at the mercy of the hashing: you hope
the distribution will spread the work evenly, based on the incoming client
address space and the duration of the connections, so you might run into the
limits despite having enough port capacity. More importantly, in case of
failures your clients will see errors, since nginx will not retry (and even
if it did, the hashing would land on the same exhausted port/IP set).

The upstream {} approach with multiple backends is a bit more robust, as, if
the ports are ever exhausted, nginx can try the next upstream. And you can
try to control this further by using least_conn backend selection.
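The difference can be sketched with a toy model (plain Python with made-up
capacities, not nginx internals): a hash keeps sending a client to the same,
possibly full, target, while least-connected keeps an escape hatch as long
as any backend has room.

```python
# Toy model of the two selection strategies (hypothetical numbers).
# Each backend has a port capacity; a selection fails if its target is full.

CAPACITY = 3  # hypothetical per-backend port capacity
backends = {"192.168.1.21:443": 0, "192.168.1.21:444": 0}

def hash_pick(client_ip):
    """Fixed mapping per client, like split_clients: no retry if full."""
    names = sorted(backends)
    return names[sum(map(ord, client_ip)) % len(names)]

def least_conn_pick():
    """Least-connected backend with spare capacity, else None."""
    name = min(backends, key=backends.get)
    return name if backends[name] < CAPACITY else None

# least_conn drains the whole pool before failing:
opened = 0
while (b := least_conn_pick()) is not None:
    backends[b] += 1
    opened += 1
print(opened)  # 6: both backends filled to capacity

# hash_pick always sends this client to the same (now full) backend:
stuck = hash_pick("10.0.0.7")
print(backends[stuck])  # 3: at capacity, this client's connection fails
```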


On Tue, Mar 7, 2017 at 3:39 PM, Andrei Belov <[email protected]> wrote:
> [...]
Maxim Konovalov
Re: Reverse Proxy with 500k connections
March 08, 2017 12:20PM
On 3/8/17 3:57 AM, Tolga Ceylan wrote:
> [...]
IP_BIND_ADDRESS_NO_PORT in fresh Linux kernels did the trick for nginx.
This is basically why we added support for it not so long ago.

You can find patches that work around the problem without this feature, though.

> [...]


--
Maxim Konovalov
Maxim Konovalov
Re: Reverse Proxy with 500k connections
March 08, 2017 12:30PM
On 3/7/17 10:50 PM, larsg wrote:
> [...]
We even wrote a blog post for you!

https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/

As a side note: I'd really encourage all of you to add our blog's RSS feed
to your readers. While there is some marketing "noise", we are still trying
to make it useful for tech people too.

--
Maxim Konovalov
Tolga Ceylan
Re: Reverse Proxy with 500k connections
March 08, 2017 07:50PM
Is IP_BIND_ADDRESS_NO_PORT the best solution for the OP's case? Unlike the
blog post with two backends, the OP's case has one backend server. If any
of the hash slots exceeds the 65k port limit, there's no chance to recover:
despite there being enough port capacity overall, the client will receive an
error if its ip/port hashed to a full slot.

IMHO picking the bind IP based on a client ip/port hash is not very robust
in this case, since you can't really make sure you are directing 10% of the
traffic to each address. This approach does not consider long-lived
connections (websockets), and the hash slots could get out of balance over
time.
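That imbalance concern can be simulated (a sketch in plain Python; crc32
stands in for nginx's hashing, and the traffic numbers are made up): when a
few NAT'd client addresses each carry many long-lived connections, whole
blocks of connections land in the same bucket.

```python
import random
import zlib

random.seed(7)  # fixed seed: deterministic run

N_IPS = 10       # ten proxy_bind source IPs, as in the thread
TOTAL = 250_000  # hypothetical concurrent websocket connections

# NAT-heavy traffic: only 1,000 distinct client addresses carry all the
# connections, and hashing on $remote_addr pins each address to one
# source IP, so blocks of ~250 connections land in the same bucket.
clients = [f"10.0.{i // 256}.{i % 256}" for i in range(1_000)]
buckets = [0] * N_IPS
for _ in range(TOTAL):
    client = random.choice(clients)
    buckets[zlib.crc32(client.encode()) % N_IPS] += 1

average = TOTAL // N_IPS
# The busiest slot sits above the perfect-split average:
print(average, max(buckets), min(buckets))
```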


On Wed, Mar 8, 2017 at 3:20 AM, Maxim Konovalov <[email protected]> wrote:
> [...]
Maxim Konovalov
Re: Reverse Proxy with 500k connections
March 09, 2017 10:40AM
This is just a matter of the number of IP addresses you have in the
proxy_bind pool and of a suitable hash function for the split_clients map.

Even by adding additional logic to the proxy_bind IP address selection, you
can still face the same problem.

On 3/8/17 9:45 PM, Tolga Ceylan wrote:
> [...]


--
Maxim Konovalov
larsg
Re: Reverse Proxy with 500k connections
March 09, 2017 04:00PM
Thanks for the advice.
I implemented this approach, unfortunately not with 100% success.

When enabling the sysctl option "net.ipv4.ip_nonlocal_bind = 1" it is
possible to use the local IP addresses (192.168.1.130-139) as the proxy_bind
address.
But when using such an address (other than 0.0.0.0), nginx produces an error
message.
An interesting aspect: the "server" attribute in the log entry is empty.
When using 0.0.0.0 as proxy_bind, everything is fine.

Do you have any ideas?

2017/03/09 14:27:09 [crit] 69765#0: *478633 connect() to 192.168.1.21:443
failed (22: Invalid argument) while connecting to upstream, client: x.x.x.x,
server: , request: "GET /myservice HTTP/1.1", upstream:
"https://192.168.1.21:443/myservice", host: "xxxxxxx:44301"

split_clients "${remote_addr}AAAA" $proxy_ip {
    # does not work
    100% 192.168.1.130;

    # works
    100% 0.0.0.0;
}

server {
    listen 44301 ssl backlog=163840;

    # works
    #proxy_bind 0.0.0.0;

    # does not work
    #proxy_bind 192.168.1.130;

    proxy_bind $proxy_ip;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272854#msg-272854

Reinis Rozitis
RE: Reverse Proxy with 500k connections
March 09, 2017 06:10PM
> When enabling sysctl option "net.ipv4.ip_nonlocal_bind = 1" it is possible
> to use local IP addresses (192.168.1.130-139) as proxy_bind address.
> But than using such an address (other than 0.0.0.0), nginx will produce an
> error message.

Do the 192.168.1.130-139 IPs actually exist, i.e. are they configured on the server?
While you can bind to an IP, that doesn't mean you can make an actual TCP connection to the upstream from it.

net.ipv4.ip_nonlocal_bind is usually used when a service needs to listen on a specific address which doesn't exist yet on the server, as in the case of VRRP / keepalived balancing etc.

rr

larsg
Re: Reverse Proxy with 500k connections
March 09, 2017 06:30PM
Hi everybody,

ok, I tracked down another Linux network problem, which I have solved now.
The situation is now as follows:

When I call my upstream address via curl (on the nginx host), selecting the
corresponding local interface (eth0-9 = 192.168.1.130-139), everything is
fine:
curl https://192.168.1.21:443/remote/events --insecure --interface eth0

But when I configure the same IP address (the one assigned to eth0; likewise
for eth1-9 etc.) as proxy_bind, I get a "110: Connection timed out".
Does anybody know this situation? I checked the sysctl config but it looks
fine...

split_clients "${remote_addr}${remote_port}AAAA" $proxy_ip {
    100% 192.168.1.130;
}

server {
    listen 44301 ssl backlog=163840;
    proxy_bind $proxy_ip;
    #proxy_bind 192.168.1.130;
    ...

2017/03/09 16:54:33 [error] 30081#0: *11 upstream timed out (110: Connection
timed out) while connecting to upstream, client: x.x.x.x, server: , request:
"GET /remote/events HTTP/1.1", upstream:
"https://192.168.1.21:443/remote/events", host: "xxxxx:44301"

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272858#msg-272858

larsg
Re: RE: Reverse Proxy with 500k connections
March 09, 2017 07:20PM
Hi Reinis,

yes, IPs exist:

ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.130  netmask 255.255.255.0  broadcast 192.168.1.255
        ether fa:16:3e:1e:ad:da  txqueuelen 1000  (Ethernet)
...
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.131  netmask 255.255.255.0  broadcast 192.168.1.255

Okay, I had enabled net.ipv4.ip_nonlocal_bind=1 because nginx complained
with a "cannot bind/assign address" error message;
net.ipv4.ip_nonlocal_bind solved that problem.
But now, with our other kernel adjustments (Reverse Path Forwarding mode 2,
loose mode as defined in RFC 3704), this option does not have any effect.
So I disabled net.ipv4.ip_nonlocal_bind again.
But same result... upstream timed out...

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272862#msg-272862

Konstantin Pavlov
Re: Reverse Proxy with 500k connections
March 09, 2017 08:30PM
On 09/03/2017 21:10, larsg wrote:
> [...]

Are those addresses reachable outside this particular VM in your OpenStack environment?

--
Konstantin Pavlov
larsg
Re: Reverse Proxy with 500k connections
March 13, 2017 03:30PM
Hi Guys,

we solved the problem, and I wanted to give you feedback about the solution.
In the end it was a problem with our Linux IP routes: after implementing
source-based policy routing, this nginx configuration worked.

Thank you for your support!

Kind Regards
Lars

Summary of Solution:

split_clients "${remote_addr}${remote_port}AAAA" $source_ip {
    10% 192.168.1.130;
    10% 192.168.1.131;
    ....
    *   192.168.1.139;
}

server {
    listen 443 ssl backlog=163840;
    proxy_bind $source_ip;
    ....

ip rule ls
0:      from all lookup local
32754:  from 192.168.1.139 lookup es-source-eth9
....
32756:  from 192.168.1.130 lookup es-source-eth0
32766:  from all lookup main
32767:  from all lookup default

ip route list table es-source-eth9
192.168.1.0/24 dev eth9 scope link
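For completeness, rules like the ones above would be created along these
lines (a sketch; the table number 101 and anything not shown in the output
above are assumptions, and the commands require root):

```shell
# Declare a routing table name (101 is an arbitrary free slot), then route
# traffic sourced from eth0's address through eth0's own table, so replies
# leave via the interface that matches proxy_bind.
# Repeat per interface for eth1..eth9 / 192.168.1.131-139.
echo "101 es-source-eth0" >> /etc/iproute2/rt_tables
ip route add 192.168.1.0/24 dev eth0 scope link table es-source-eth0
ip rule add from 192.168.1.130 lookup es-source-eth0
```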

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272915#msg-272915

foxgab
Re: Reverse Proxy with 500k connections
July 14, 2017 08:20AM
Are upstream keepalive connections compatible with websockets?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,275486#msg-275486
