limit_req per subnet?

Grant
limit_req per subnet?
December 13, 2016 11:10PM
I recently suffered a DoS from a series of 10 sequential IP addresses.
limit_req would have dealt with the problem if a single IP address had
been used. Can it be made to work in a situation like this, where a
series of sequential IP addresses is in play? Maybe per subnet?

- Grant
Anonymous User
Re: limit_req per subnet?
December 14, 2016 01:50AM
That attack wasn't very distributed. ;-)

Did you see if the IPs were from an ISP? If not, I'd ban the service using the Hurricane Electric BGP toolkit as a guide. At a minimum, you should be blocking the major cloud services, especially OVH. They offer free trial accounts, so of course the hackers abuse them.

If the attack was from an ISP, I can visualize a fail2ban scheme that blocks on the last octet not being too hard to implement, i.e. block xxx.xxx.xxx.0/24. Or maybe just let a typical fail2ban setup do your limiting and don't get fancy about the IP range.

I try "traffic management" at the firewall first. As I discovered with "deny" in nginx, much CPU work is still done before the request is ignored. (I don't recall the details exactly, but there is a thread I started on the topic in this list.) Better to block via the firewall since you will be running one anyway.

c0nw0nk
Re: limit_req per subnet?
December 14, 2016 08:40AM
I am curious what request URI they were hitting. Was it a dynamic page
or a static file?

Grant
Re: limit_req per subnet?
December 14, 2016 07:40PM
> Did you see if the IPs were from an ISP? If not, I'd ban the service using the Hurricane Electric BGP as a guide. At a minimum, you should be blocking the major cloud services, especially OVH. They offer free trial accounts, so of course the hackers abuse them.


What sort of sites run into problems after doing that? I'm sure some
sites need to allow cloud services to access them. A startup search
engine could be run from such a service.


> If the attack was from an ISP, I can visualize a fail2ban scheme blocking the last quad not being too hard to implement . That is block xxx.xxx.xxx.0/24. ‎ Or maybe just let a typical fail2ban set up do your limiting and don't get fancy about the IP range.
>
> I try "traffic management" at the firewall first. As I discovered with "deny" ‎in nginx, much CPU work is still done prior to ignoring the request. (I don't recall the details exactly, but there is a thread I started on the topic in this list.) Better to block via the firewall since you will be running one anyway.


It sounds like limit_req in nginx does not have any way to do this.
How would you accomplish this in fail2ban?

- Grant


Grant
Re: limit_req per subnet?
December 14, 2016 07:40PM
> I am curious what is the request uri they was hitting. Was it a dynamic page
> or file or a static one.


It was semrush and it was all manner of dynamic pages.

- Grant
Anonymous User
Re: limit_req per subnet?
December 14, 2016 08:20PM
They claim to obey robots.txt. They also claim to use consecutive IP addresses.
https://www.semrush.com/bot/

Some dated posts (2011) indicate semrush uses AWS. I block all of AWS's IP space and can say I've never seen a semrush bot, so that might be a solution. I got the AWS IP space from an Amazon web page.

I get a bit of pushback about blocking things that are not eyeballs, like colos and VPS providers, but it works for me. I only block after seeing hacking attempts in my logs.



Anonymous User
Re: limit_req per subnet?
December 14, 2016 08:50PM
I'm no fail2ban guru. Trust me. I'd suggest going on serverfault. But my other post indicates semrush resides on AWS, so just block AWS. I doubt there is any harm in blocking AWS since no major search engine uses them. 

Regarding search engines, the reality is only Google matters. Just look at your logs. That said, I allow Google, Yahoo, and Bing, but Yahoo/Bing isn't even 5% of Google traffic. Everything else I block. Majestic (MJ12) is just ridiculous. I allow the anti-virus companies to poke around, though I can't figure out what exactly their probes accomplish. Often Intel/McAfee just pings the server, perhaps to survey the hosting software and revision. Good advertising for nginx!




shiz
Re: limit_req per subnet?
December 14, 2016 09:30PM
I rate-limit them using the user-agent.
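
Roughly like this, for example (untested sketch; the bot names and the rate are just placeholders):

map $http_user_agent $limit_ua {
    default                             "";
    "~*(semrushbot|ahrefsbot|mj12bot)"  $http_user_agent;
}

# an empty key is not counted, so normal browsers are unaffected
limit_req_zone $limit_ua zone=bots:10m rate=30r/m;

# then in the server/location you want to protect:
limit_req zone=bots burst=5;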

Grant
Re: limit_req per subnet?
December 14, 2016 11:00PM
> I'm no fail2ban guru. Trust me. I'd suggest going on serverfault. But my other post indicates semrush resides on AWS, so just block AWS. I doubt there is any harm in blocking AWS since no major search engine uses them.
>
> Regarding search engines, the reality is only Google matters. Just look at your logs. That said, I allow Google, yahoo, and Bing. But yahoo/bing isn't even 5% of Google traffic. Everything else I block. Majestic (MJ12) is just ridiculous. I allow the anti-virus companies to poke around, though I can't figure out what exactly their probes accomplish. Often Intel/McAfee just pings the server, perhaps to survey hosting software and revision. Good advertising for nginx!


I would really prefer not to block cloud services. It sounds like an
admin headache down the road.

nginx limit_req works great for a single IP attacker, but all it takes
is 3 IPs for an attacker to triple his allowable rate, even from
sequential IPs? I'm surprised there's no way to combat this.

- Grant


Grant
Re: limit_req per subnet?
December 14, 2016 11:10PM
> I rate limit them using the user-agent


Maybe this is the best solution, although of course it doesn't rate
limit real attackers. Is there a good method for monitoring which UAs
request pages above a certain rate so I can write a limit for them?

- Grant
Grant
Re: limit_req per subnet?
December 14, 2016 11:20PM
>> I rate limit them using the user-agent
>
>
> Maybe this is the best solution, although of course it doesn't rate
> limit real attackers. Is there a good method for monitoring which UAs
> request pages above a certain rate so I can write a limit for them?


Actually, is there a way to limit rate by UA on the fly? If so, can I
do that and somehow avoid limiting multiple legitimate browsers with
the same UA?

- Grant
Anonymous User
Re: limit_req per subnet?
December 15, 2016 01:10AM
By the time you get to the UA, nginx has done a lot of work.

You could return 444 based on the UA, then read that code in the log file with fail2ban or a clever script. That way you can block them at the firewall. It won't help immediately with the sequential IPs, but that really won't be a problem.
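
Something along these lines, for example (untested; the UA patterns are only placeholders):

map $http_user_agent $bad_ua {
    default                             0;
    "~*(semrushbot|ahrefsbot|mj12bot)"  1;
}

# inside the server block:
if ($bad_ua) {
    return 444;   # connection closed; shows up as status 444 in the access log
}

fail2ban then only needs to match the 444 status in the access log and add the client to the firewall.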


shiz
Re: limit_req per subnet?
December 15, 2016 02:30AM
I've implemented something based on
https://community.centminmod.com/threads/blocking-bad-or-aggressive-bots.6433/

Works perfectly fine for me.

c0nw0nk
Re: limit_req per subnet?
December 15, 2016 05:10AM
Caching the page output with proxy_cache / fastcgi_cache will help. Flood all
you want: Nginx handles flooding and lots of connections fine; your back end
is the weakness / bottleneck that is allowing them to be successful in
affecting your service.

You could also use the secure_link module to help on your index.php or .html,
or whatever it is you have going on that generates the link they are
attacking: you can generate a unique hash that expires and is valid for that
IP only. There are a lot of solutions.
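
As a starting point, a micro-caching sketch for a PHP backend might look like
this (untested; the paths, zone name and PHP-FPM socket are just examples):

fastcgi_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=micro:10m
                   max_size=100m inactive=10m;

server {
    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass   unix:/run/php/php-fpm.sock;

        fastcgi_cache        micro;
        fastcgi_cache_key    "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid  200 1s;   # even 1 second absorbs a flood
        fastcgi_cache_lock   on;       # one request per key populates the cache
        fastcgi_cache_use_stale updating error timeout;
    }
}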

Anonymous User
Re: limit_req per subnet?
December 15, 2016 09:20AM
This is an interesting bit of code. However, if you are being DDoS-ed, this just stops nginx from replying. It isn't as if nginx is isolated from the attack. I would still rather block the IP at the firewall and prevent nginx from taking any action.

The use of $bot_agent opens up a lot of possibilities if the value can be fed to the log file.
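
For example, assuming the linked config leaves $bot_agent empty for normal traffic and non-empty for bots, something like this (untested) would log only the bot hits to their own file:

log_format botlog '$remote_addr - [$time_local] "$request" '
                  '$status "$http_user_agent" bot=$bot_agent';

access_log /var/log/nginx/bots.log botlog if=$bot_agent;
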
c0nw0nk
Re: limit_req per subnet?
December 15, 2016 11:30AM

Any layer 7 attack under which Nginx begins struggling to accept connections
is a successful one, and at that point it should be blocked at the router
level. But Nginx handles a lot of connections very well, which is why the
limit_conn and limit_req modules exist: the majority of layer 7 attacks Nginx
won't have a problem denying itself. The bottlenecks are backend processes
like MySQL, PHP and Python. If they clog up accepting traffic, Nginx will run
out of connections available to keep serving other requests for different
files / paths on the server (see
http://nginx.org/en/docs/ngx_core_module.html#worker_connections). That is
what causes your entire Nginx server to go slow / unresponsive; at that point
even the 503 and 50x error pages won't display, all connections begin to time
out, and you should block the IPs exhausting Nginx's connections at the
router level since Nginx can no longer cope.

Nginx has a small resource footprint under layer 7 attacks; you should only
start blocking at the router level when Nginx can no longer handle them on
its own and begins timing out due to worker_connections getting exhausted.
But it is rare that an attack is large enough to exhaust those, and you can
increase worker_connections and decrease timeout values to fix that easily.
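
The relevant knobs, roughly (the numbers are only examples):

events {
    worker_connections 8192;          # raise the per-worker connection budget
}

http {
    # per-client caps
    limit_conn_zone $binary_remote_addr zone=perip_conn:10m;
    limit_req_zone  $binary_remote_addr zone=perip_req:10m rate=10r/s;

    # shed slow / idle clients quickly so connections are freed
    client_header_timeout 10s;
    client_body_timeout   10s;
    send_timeout          10s;
    keepalive_timeout     15s;

    server {
        limit_conn perip_conn 20;
        limit_req  zone=perip_req burst=20;
    }
}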

Anonymous User
Re: limit_req per subnet?
December 16, 2016 12:10AM
Here is my philosophy. A packet arrives at your server. This can be broken down into two parts: who are you, and what do you want. The firewall does a fine job of stopping the hacker at the "who are you" point.

When the packet reaches nginx, the "what do you want" part comes into play. Most likely nginx will reject it. But all software has bugs, and thus there will be zero-days. So I'd rather stop the bad actor at the firewall.

Grant
Re: limit_req per subnet?
December 16, 2016 01:00AM
> proxy_cache / fastcgi_cache the pages output will help. Flood all you want
> Nginx handles flooding and lots of connections fine your back end is your
> weakness / bottleneck that is allowing them to be successful in effecting
> your service.


Definitely. My backend is of course the bottleneck so I'd like nginx
to refrain from passing a request on to the backend if it is deemed to
be part of a group of requests that should be rate limited. But there
doesn't seem to be a good way to do that if the group should contain
more than one IP. I think any method that groups requests by UA will
require too much human monitoring.

- Grant
c0nw0nk
Re: limit_req per subnet?
December 16, 2016 06:10AM
That is why you cache the request. DoS, or in your case DDoS since multiple
IPs are involved: caching backend responses and having Nginx serve the cached
response, even if that cached response is only valid for 1 second, will save
your day.
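
For a proxied backend the same idea looks roughly like this (untested; the
upstream address and the cookie name used to skip logged-in users are just
examples), and it needs no changes to the backend itself:

proxy_cache_path /var/cache/nginx/dyn keys_zone=dyn:10m max_size=100m;

server {
    location / {
        proxy_pass http://127.0.0.1:8080;

        proxy_cache            dyn;
        proxy_cache_key        "$scheme$request_method$host$request_uri";
        proxy_cache_valid      200 301 1s;
        proxy_cache_lock       on;
        proxy_cache_use_stale  updating;

        # don't store or serve cached pages for sessions
        proxy_cache_bypass  $cookie_session;
        proxy_no_cache      $cookie_session;
    }
}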

Grant
Re: limit_req per subnet?
December 29, 2016 01:20AM
> That is why you cache the request. DoS or in your case DDoS since multiple
> are involved Caching backend responses and having Nginx serve a cached
> response even for 1 second that cached response can be valid for it will
> save your day.


That would be a big project because it would mean rewriting some of
the functionality of my backend. I'm looking for something that can
be implemented independently of the backend, but that doesn't seem to
exist in nginx.

- Grant
Francis Daly
Re: limit_req per subnet?
December 29, 2016 12:20PM
On Wed, Dec 28, 2016 at 04:16:06PM -0800, Grant wrote:

Hi there,

> I'm looking for something that can
> be implemented independently of the backend, but that doesn't seem to
> exist in nginx.

http://nginx.org/r/limit_req_zone

You can define the "key" any way that you want.

Perhaps you can create something using "geo". Perhaps you want "the first
three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
address, rounded down to a multiple of 8". Perhaps you want something
else.

The exact thing that you want, probably does not exist.

The tools that are needed to create it, probably do exist.

All that seems to be missing is the incentive for someone to actually
do the work to build a thing that you would like to exist.

f
--
Francis Daly francis@daoine.org
Grant
Re: limit_req per subnet?
December 29, 2016 05:20PM
>> I'm looking for something that can
>> be implemented independently of the backend, but that doesn't seem to
>> exist in nginx.
>
> http://nginx.org/r/limit_req_zone
>
> You can define the "key" any way that you want.
>
> Perhaps you can create something using "geo". Perhaps you want "the first
> three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
> address, rounded down to a multiple of 8". Perhaps you want something
> else.


So I'm sure I understand, none of the functionality described above
exists currently?

- Grant


Grant
Re: limit_req per subnet?
December 30, 2016 01:40PM
>>> I'm looking for something that can
>>> be implemented independently of the backend, but that doesn't seem to
>>> exist in nginx.
>>
>> http://nginx.org/r/limit_req_zone
>>
>> You can define the "key" any way that you want.
>>
>> Perhaps you can create something using "geo". Perhaps you want "the first
>> three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
>> address, rounded down to a multiple of 8". Perhaps you want something
>> else.
>
>
> So I'm sure I understand, none of the functionality described above
> exists currently?


Or can it be configured without hacking the nginx core?

- Grant


Francis Daly
Re: limit_req per subnet?
December 31, 2016 11:40AM
On Thu, Dec 29, 2016 at 08:09:33AM -0800, Grant wrote:

Hi there,

> >> I'm looking for something that can
> >> be implemented independently of the backend, but that doesn't seem to
> >> exist in nginx.
> >
> > http://nginx.org/r/limit_req_zone
> >
> > You can define the "key" any way that you want.
> >
> > Perhaps you can create something using "geo". Perhaps you want "the first
> > three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
> > address, rounded down to a multiple of 8". Perhaps you want something
> > else.
>
>
> So I'm sure I understand, none of the functionality described above
> exists currently?

A variable with exactly the value that you want it to have, probably
does not exist currently in the stock nginx code.

The code that allows you to create a variable with exactly the value
that you want it to have, probably does exist in the stock nginx code.

You can use "geo", "map", "set", or (probably) any of the extension
languages to give the variable the value that you want it to have.

For example:

map $binary_remote_addr $bin_slash16 {
"~^(?P<a>..)..$" "$a";
}

will probably come close to making $bin_slash16 hold a binary
representation of the first two octets of the connecting ip address.

(You'll want to confirm whether "dot" matches "any byte" in your regex
engine; or whether you can make it match "any byte" (specifically
including the byte that normally represents newline); before you trust
that fully, of course.)
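
Wiring that into the limit would then be something like the following
(untested; the rate and zone size are arbitrary):

limit_req_zone $bin_slash16 zone=pernet:10m rate=10r/s;

server {
    location / {
        limit_req zone=pernet burst=20;
    }
}

A client whose address does not match the map (an IPv6 client, say) gets an
empty key, and requests with an empty key are not rate limited at all.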

If you don't like map with regex, you can use "geo" with a (long) list
of networks, to set your new variable to whatever value you want.
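
For example (the networks here are only documentation placeholders):

geo $net_key {
    default           "";
    203.0.113.0/24    "203.0.113.0/24";
    198.51.100.0/24   "198.51.100.0/24";
}

and then use $net_key as the limit_req_zone key in the same way.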

Good luck with it,

f
--
Francis Daly francis@daoine.org
Grant
Re: limit_req per subnet?
January 02, 2017 04:50PM
>> >> I'm looking for something that can
>> >> be implemented independently of the backend, but that doesn't seem to
>> >> exist in nginx.
>> >
>> > http://nginx.org/r/limit_req_zone
>> >
>> > You can define the "key" any way that you want.
>> >
>> > Perhaps you can create something using "geo". Perhaps you want "the first
>> > three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
>> > address, rounded down to a multiple of 8". Perhaps you want something
>> > else.
>>
>>
>> So I'm sure I understand, none of the functionality described above
>> exists currently?
>
> A variable with exactly the value that you want it to have, probably
> does not exist currently in the stock nginx code.
>
> The code that allows you to create a variable with exactly the value
> that you want it to have, probably does exist in the stock nginx code.
>
> You can use "geo", "map", "set", or (probably) any of the extension
> languages to give the variable the value that you want it to have.
>
> For example:
>
> map $binary_remote_addr $bin_slash16 {
> "~^(?P<a>..)..$" "$a";
> }
>
> will probably come close to making $bin_slash16 hold a binary
> representation of the first two octets of the connecting ip address.
>
> (You'll want to confirm whether "dot" matches "any byte" in your regex
> engine; or whether you can make it match "any byte" (specifically
> including the byte that normally represents newline); before you trust
> that fully, of course.)


That sounds like a good solution. Will using map along with a regex
slow the server down much?

- Grant
Francis Daly
Re: limit_req per subnet?
January 04, 2017 07:40PM
On Mon, Jan 02, 2017 at 07:43:38AM -0800, Grant wrote:

Hi there,

> > For example:
> >
> > map $binary_remote_addr $bin_slash16 {
> > "~^(?P<a>..)..$" "$a";
> > }
> >
> > will probably come close to making $bin_slash16 hold a binary
> > representation of the first two octets of the connecting ip address.

> That sounds like a good solution. Will using map along with a regex
> slow the server down much?

The usual rule is that if you do not measure the slow-down on your test
system, then there is not a significant slow-down for your use cases.

Good luck with it,

f
--
Francis Daly francis@daoine.org