How to control the total requests in Nginx

Posted by tongshushan@migu.cn
Hi guys,

I want to use Nginx to protect my system by allowing a maximum of 2000 requests to be sent to my service (an HTTP location).
The configs below only limit per client IP; they do not control the total number of requests.
########## method 1 ##########

limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    location /mylocation/ {
        limit_conn addr 2;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}

########## method 2 ##########

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

server {
    location /mylocation/ {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}



How can I do it?




Tong
Additional note: the total requests will be sent from different client IPs.



Tong

Francis Daly
Re: How to control the total requests in Nginx
November 30, 2017 11:20AM
On Thu, Nov 30, 2017 at 05:12:18PM +0800, tongshushan@migu.cn wrote:

Hi there,

> I want to use Nginx to protect my system by allowing a maximum of 2000 requests to be sent to my service (an HTTP location).
> The configs below only limit per client IP; they do not control the total number of requests.

> ##########method 1##########
>
> limit_conn_zone $binary_remote_addr zone=addr:10m;

http://nginx.org/r/limit_conn_zone

If "key" is "$binary_remote_addr", it will be the same for the same
client ip, and different for different client ips; the limits apply to
each individual value of client ip (strictly: to each individual value of
"key").

If "key" is (for example) "fixed", it will be the same for every
connection, and so the limits will apply for all connections.

Note: that limits concurrent connections, not requests.

> ##########method 2##########
>
> limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

http://nginx.org/r/limit_req_zone

Again, set "key" to something that is the same for all requests, and
the limit will apply to all requests.
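
As an illustration, a limit_req zone keyed on a constant string might look like the sketch below (the zone name, size, rate and burst are placeholders, not recommended values):

limit_req_zone "total" zone=total:1m rate=2000r/s;

server {
    location /mylocation/ {
        # every request shares the single "total" counter, so the
        # 2000r/s budget applies to the location as a whole
        limit_req zone=total burst=100;
        proxy_pass http://my_server/mylocation/;
    }
}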

f
--
Francis Daly francis@daoine.org
A limit of two connections per address is just an example.
"What does 2000 requests mean? Is that per second?" Yes, it's QPS (queries per second).



Tong Shushan
MIGU Video Technology Co., Ltd., R&D Department
Mobile: 13818663262
Telephone: 021-51856688 (81275)
Email: tongshushan@migu.cn

From: Gary
Sent: 2017-11-30 17:44
To: nginx
Subject: Re: How to control the total requests in Nginx
I think a limit of two connections per address is too low. I know that tip pages suggest a low limit for so-called anti-DDoS (really just flood protection). Some large carriers can generate 30+ connections per IP, probably because they lack sufficient IPv4 address space for their millions of users. This is based on my logs. I used to have a limit of 10, and it was reached quite often just from corporate users.

The 10 per second rate is fine, and probably about as low as you should go.

What does 2000 requests mean? Is that per second?


Francis,
What would be the same "key" for all requests from different client IPs for limit_conn_zone/limit_req_zone? I have no idea how to set this.



Tong

Francis Daly
Re: Re: How to control the total requests in Nginx
November 30, 2017 07:40PM
On Thu, Nov 30, 2017 at 08:04:41PM +0800, tongshushan@migu.cn wrote:

Hi there,

> What would be the same "key" for all requests from different client IPs for limit_conn_zone/limit_req_zone? I have no idea how to set this.

Any $variable might be different in different connections. Any fixed
string will not be.

So:

limit_conn_zone "all" zone=all...

for example.
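
Filled out with assumed values (the zone size and the 2000 limit below are just placeholders for illustration), that might look like:

limit_conn_zone "all" zone=all:1m;

server {
    location /mylocation/ {
        # one shared counter: this caps the total number of concurrent
        # connections to the location, rather than connections per client IP
        limit_conn all 2000;
        proxy_pass http://my_server/mylocation/;
    }
}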

f
--
Francis Daly francis@daoine.org
Peter Booth
Re: How to control the total requests in Nginx
November 30, 2017 11:30PM
So what exactly are you trying to protect against?
Against “bad people” or “my website is busier than I think I can handle?”

Sent from my iPhone

Here is a log of real-life IP limiting with a 30-connection limit:
86.184.152.14 British Telecommunications PLC
8.37.235.199 Level 3 Communications Inc.
130.76.186.14 The Boeing Company

security.5.bz2:Nov 29 20:50:53 theranch kernel: ipfw: 5005 drop session type 40 86.184.152.14 58714 -> myip 80, 34 too many entries
security.6.bz2:Nov 29 16:01:31 theranch kernel: ipfw: 5005 drop session type 40 8.37.235.199 10363 -> myip 80, 42 too many entries
above repeated twice
security.8.bz2:Nov 29 06:39:15 theranch kernel: ipfw: 5005 drop session type 40 130.76.186.14 34056 -> myip 80, 31 too many entries
above repeated 18 times

I have an Alexa rating around 960,000. Hey, at least I made it to the top one million websites. But my point is that even with a limit of 30, I'm kicking out readers.

Look at the nature of the IPs. British Telecom is one of those huge ISPs where I guess different users share the same IP. (Not sure.) Level 3 is the provider at many Starbucks locations, besides being a significant traffic carrier. Boeing has decent IP space, but maybe only a few IPs per facility. Who knows.

My point is that if you set the limit at two, that is way too low.

The only real way to protect against DDoS is to use a commercial reverse proxy. I don't think limiting connections in Nginx (or in the firewall) will stop a real attack. It will probably stop some kid in his parents' basement, but today you can rent DDoS attacks on the dark web.

If you really want to improve the performance of your server, do severe IP filtering at the firewall. Limit the number of search engines that can read your site. Block major hosting companies and virtual private servers: there are no eyeballs there, just VPNs (whose users can drop the VPN if they really want to read your site) and hackers. Easily half of internet traffic is bots.

Per some discussions on this list, it is best not to block using Nginx, but rather at the firewall. Nginx parses the HTTP request even when it is blocking the IP, so the CPU load isn't insignificant. As an alternative, you can use a reputation-based blocking list. (I don't use one on web servers, just on email servers.)

"My website is busier than I think I can handle."



Tong

From: Peter Booth
Date: 2017-12-01 06:25
To: nginx
Subject: Re: How to control the total requests in Nginx
So what exactly are you trying to protect against?
Against “bad people” or “my website is busier than I think I can handle?”

Sent from my iPhone

On Nov 30, 2017, at 6:52 AM, "[email protected]" <[email protected]> wrote:

a limit of two connections per address is just a example.
What does 2000 requests mean? Is that per second? yes,it's QPS.



童树山
咪咕视讯科技有限公司 研发部
Mobile:13818663262
Telephone:021-51856688(81275)
Email:[email protected]

发件人: Gary
发送时间: 2017-11-30 17:44
收件人: nginx
主题: Re: 回复: How to control the total requests in Ngnix
I think a limit of two connections per address is too low. I know that tip pages suggest a low limit in so-called anti-DDOS (really just flood protection). Some large carriers can generate 30+ connections per IP, probably because they lack sufficient IPV4 address space for their millions of users. This is based on my logs. I used to have a limit of 10 and it was reached quite often just from corporate users.

The 10 per second rate is fine, and probably about as low as you should go.

What does 2000 requests mean? Is that per second?


From: tongshushan@migu.cn
Sent: November 30, 2017 1:14 AM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: 回复: How to control the total requests in Ngnix

Additional: the total requests will be sent from different client ips.



Tong

发件人: tongshushan@migu.cn
发送时间: 2017-11-30 17:12
收件人: nginx
主题: How to control the total requests in Ngnix
Hi guys,

I want to use ngnix to protect my system,to allow max 2000 requests sent to my service(http location).
The below configs are only for per client ip,not for the total requests control.
##########method 1##########

limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
location /mylocation/ {
limit_conn addr 2;
proxy_pass http://my_server/mylocation/;
proxy_set_header Host $host:$server_port;
}
}

##########method 2##########

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
server {
location /mylocation/ {
limit_req zone=one burst=5 nodelay;
proxy_pass http://my_server/mylocation/;
proxy_set_header Host $host:$server_port;
}
}



How can I do it?




Tong
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
I configured it as below:

limit_req_zone "all" zone=all:100m rate=2000r/s;
limit_req zone=all burst=100 nodelay;

But when testing, I used a tool to send requests at about 486.1 QPS (well below 2000), and I got many 503 errors. The error info is below:

2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost"

Why is the excess 101.000? I set the rate to 2000r/s.



Tong Shushan
MIGU Video Technology Co., Ltd., R&D Department
Mobile: 13818663262
Telephone: 021-51856688 (81275)
Email: tongshushan@migu.cn

Gary
Re: How to control the total requests in Nginx
December 01, 2017 05:20AM
I thought the rate is per IP address, not for the whole server.
I sent the test requests from only one client.




Tong Shushan
MIGU Video Technology Co., Ltd., R&D Department
Mobile: 13818663262
Telephone: 021-51856688 (81275)
Email: tongshushan@migu.cn

From: Gary
Date: 2017-12-01 12:17
To: nginx
Subject: Re: How to control the total requests in Nginx
I thought the rate is per IP address, not for the whole server.

Maxim Dounin
Re: Re: How to control the total requests in Nginx
December 01, 2017 02:50PM
Hello!

On Fri, Dec 01, 2017 at 11:18:06AM +0800, tongshushan@migu.cn wrote:

> I configured as below:
> limit_req_zone "all" zone=all:100m rate=2000r/s;
> limit_req zone=all burst=100 nodelay;
> But when testing, I used a tool to send requests at about 486.1 QPS (well below 2000), and I got many 503 errors. The error info is below:
>
> 2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost"
>
> Why is the excess 101.000? I set the rate to 2000r/s.

You've configured "burst=100", and nginx starts to reject requests
when the accumulated number of requests (excess) exceeds
the configured burst size.

In short, the algorithm works as follows: every request increments
excess by 1, and decrements it according to the rate configured. If
the resulting value is greater than burst, the request is
rejected. You can read more about the algorithm used in
Wikipedia, see https://en.wikipedia.org/wiki/Leaky_bucket.
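
A rough worked example of that bookkeeping, using the numbers from this thread (a simplification of what the module actually does): with rate=2000r/s the excess drains at about 2 per millisecond, and with burst=100 a request is rejected once the excess goes above 100. If a load tool fires a few hundred requests within the same millisecond, each one adds 1 to the excess while almost nothing has had time to drain, so the excess passes 100 at roughly the 101st request; with "nodelay", requests after that point get 503 until enough time has passed for the excess to drain, even though the average over the whole second (about 486 r/s here) never came close to 2000r/s. That is consistent with the "excess: 101.000" in the log.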

--
Maxim Dounin
http://mdounin.ru/
Gary
Re: How to control the total requests in Nginx
December 01, 2017 03:50PM
Is this limiting for one connection or rate limiting for the entire server? I interpret this as a limit for one connection.

I got rid of the trailing period.
https://en.wikipedia.org/wiki/Leaky_bucket

A request is one line in the access log, I assume, typically an HTTP verb like "GET". I use a single-core VPS, so I don't have much CPU power. Unless the verb's action is trivial, I doubt I would hit 2000/s. From experimentation, a burst of 10 gets the images loading mostly unimpeded, and a rate of 10/s is where you see a page just start to slow down. I think a rate of 2000/s isn't much of a limit.






Maxim Dounin
Re: How to control the total requests in Nginx
December 01, 2017 05:20PM
Hello!

On Fri, Dec 01, 2017 at 06:43:36AM -0800, Gary wrote:

> Is this limiting for one connection or rate limiting for the
> entire server? I interpret this as a limit for one connection.

The request limiting can be configured in multiple ways. It is
typically configured using $binary_remote_addr variable as a key
(see http://nginx.org/r/limit_req_zone), and hence the limit is
per IP address. The particular configuration uses

: limit_req_zone "all" zone=all:100m rate=2000r/s;

That is, the limit is applied for the "all" key - a static string
without any variables. This means that all requests (where
limit_req with the zone in question applies) will share the same
limit.

--
Maxim Dounin
http://mdounin.ru/
I want to set the rate limit for the entire server, not for each client IP.



Tong Shushan
MIGU Video Technology Co., Ltd., R&D Department
Mobile: 13818663262
Telephone: 021-51856688 (81275)
Email: tongshushan@migu.cn

Francis Daly
Re: Re: How to control the total requests in Nginx
December 02, 2017 12:10PM
On Fri, Dec 01, 2017 at 11:18:06AM +0800, tongshushan@migu.cn wrote:

Hi there,

Others have already given some details, so I'll try to put everything
together.

> limit_req_zone "all" zone=all:100m rate=2000r/s;

The size of the zone (100m, above) relates to the number of individual
key values that the zone can store -- if you have too many values for
the size, then things can break.

In your case, you want just one key; so you can have a much smaller
zone size.

Using 100m won't break things, but it will be wasteful.


The way that nginx uses the "rate" value is not "start of second, allow
that number, block the rest until the start of the next second". It is
"turn that number into time-between-requests, and block the second
request if it is within that time of the first".

> limit_req zone=all burst=100 nodelay;

"block" can be "return error immediately", or can be "delay until the
right time", depending on what you configure. "nodelay" above means
"return error immediately".

Rather than strictly requiring a fixed time between requests always, it
can be useful to enforce an average rate; in this case, you configure
"burst" to allow that many requests as quickly as they arrive, before
delaying-or-erroring on the next ones. That is, to use different numbers:

rate=1r/s with burst=10

would mean that it would accept 10 requests all at once, but would not
accept the 11th until 10s later (in order to bring the average rate down
to 1r/s).

Note: that is not exactly what happens -- for that, read the fine source
-- but it is hopefully a clear high-level description of the intent.
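
As a sketch of that 1r/s example in configuration form (the zone and location names are made up for illustration):

limit_req_zone $binary_remote_addr zone=perip:1m rate=1r/s;

server {
    location / {
        # without "nodelay": the first request is served at once, the rest of
        # the burst is delayed so the average rate stays at 1r/s, and anything
        # beyond the burst is rejected
        limit_req zone=perip burst=10;

        # with "nodelay": up to 10 requests are served immediately, and
        # further requests get 503 until the accumulated excess has drained
        # limit_req zone=perip burst=10 nodelay;
    }
}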


And one other thing is relevant here: nginx counts in milliseconds. So
I think that you are unlikely to get useful rate limiting once you
approach 1000r/s.

> But when testing, I used a tool to send requests at about 486.1 QPS (well below 2000), and I got many 503 errors. The error info is below:
>
> 2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost"
>
> Why is the excess 101.000? I set the rate to 2000r/s.

If your tool sends all requests at once, nginx will handle "burst" before
trying to enforce your rate, and your "nodelay" means that nginx should
error immediately then.

If you remove "nodelay", then nginx should slow down processing without
sending the 503 errors.

If your tool sends one request every 0.5 ms, then nginx would have a
chance to process them all without exceeding the declared limit rate. (But
the server cannot rely on the client to behave, so the server has to be
told what to do when there is a flood of requests.)



As a way of learning how to limit requests into nginx, this is useful. As
a way of solving a specific problem that you have right now, it may or
may not be useful -- that depends on what the problem is.

Good luck with it,

f
--
Francis Daly francis@daoine.org
Hi Francis,

Thanks for the help.
I might have misunderstood some concepts, so let me restate them here:
burst -- the bucket size;
rate -- the speed at which the bucket drains (not the speed at which requests are sent).

Is that right?



Tong

Gary
Re: How to control the total requests in Nginx
December 03, 2017 10:10PM
For what situation would it be appropriate to use "nodelay"?

Peter Booth
Re: How to control the total requests in Nginx
December 04, 2017 07:20AM
In a situation where you are confident that the workload is coming from a DDoS attack and not from real users.

For this example the limit is very low, and nodelay wouldn't seem appropriate. If you look at the TechEmpower benchmark results, you can see that a single-core VM should be able to serve over 10,000 requests per second.

Sent from my iPhone

Peter Booth
Re: How to control the total requests in Nginx
December 04, 2017 10:30AM
I’ve used the equivalent of nodelay with a rate of 2000 req/sec per IP when a retail website was being attacked by hackers. This was in combination with microcaching and a CDN to protect the back end and ensure the site could continue to function normally.
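
For illustration, a microcaching setup of that kind might look roughly like the sketch below (the cache path, zone name, and one-second TTL are assumptions, not details from this thread):

proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m max_size=100m;

server {
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;          # cache successful responses very briefly
        proxy_cache_use_stale updating;    # keep serving the stale copy while it refreshes
        proxy_cache_lock on;               # collapse concurrent misses into one upstream fetch
        proxy_pass http://backend;
    }
}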

Sent from my iPhone

Francis Daly
Re: Re: How to control the total requests in Nginx
December 05, 2017 10:00AM
On Sun, Dec 03, 2017 at 11:58:16AM +0800, tongshushan@migu.cn wrote:

Hi there,

> I might have misunderstood some concepts, so let me restate them here:
> burst -- the bucket size;

Yes.

> rate -- the speed at which the bucket drains (not the speed at which requests are sent).

Yes.

The server can control how long it waits before starting to process a
request (rate) and how many requests it will process quickly (burst). It
cannot control when the requests are sent to it.

f
--
Francis Daly francis@daoine.org
Hi there,

I have combined the per-client-IP limit and the cluster-wide (total requests) limit as below, and it seems to work as expected.

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;  # per-client-IP rate limit
limit_req_zone "all" zone=all:20m rate=2000r/s;              # total (cluster-wide) rate limit

server {
    location /private/rush2purchase/ {
        proxy_pass http://my_server/private/rush2purchase/;
        proxy_set_header Host $host:$server_port;
        limit_req zone=one burst=15 nodelay;
        limit_req zone=all burst=3000 nodelay;
    }
}
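
One small refinement, following Francis's earlier point about zone sizing: the "all" zone only ever stores a single key, so a much smaller zone would be enough, for example:

limit_req_zone "all" zone=all:1m rate=2000r/s;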

Thanks, everyone.



Tong
