[Working update]: Request rate limiting on the backend section

Posted by Krishna Kumar (Engineering)
November 08, 2017 10:50AM
I finally got the backend rate limiting working pretty well. Here is the
configuration in case it helps anyone else do the same:

frontend http-fe
    bind <ip>
    default_backend http-be

backend http-be
    http-request track-sc2 fe_id
    stick-table type integer size 1k expire 30s store http_req_rate(1s),gpc0,gpc0_rate(1s)
    acl within_limit sc2_gpc0_rate() le 1000
    acl increment_gpc0 sc2_inc_gpc0 ge 0
    http-request allow if within_limit increment_gpc0
    http-request deny deny_status 429 if !within_limit
    server my-server <ip>

During the test, the stick table contents were:
0x16e593c: key=3 use=98 exp=29999 gpc0=44622 gpc0_rate(1000)=1000 http_req_rate(1000)=69326
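For anyone automating this, the "show table" dump from the admin socket can be parsed mechanically. A minimal sketch (the field layout matches the dump line above; the parser itself is not from the thread):

```python
def parse_table_line(line):
    """Parse one entry of an HAProxy 'show table <name>' dump into a dict.

    Fields look like gpc0=44622 or gpc0_rate(1000)=1000, where the number
    in parentheses is the rate period in milliseconds.
    """
    entry = {}
    # The first token is the entry address ("0x16e593c:"); skip it.
    for field in line.split()[1:]:
        name, _, value = field.partition("=")
        entry[name] = int(value)
    return entry

# The dump line from the test above:
line = ("0x16e593c: key=3 use=98 exp=29999 gpc0=44622 "
        "gpc0_rate(1000)=1000 http_req_rate(1000)=69326")
entry = parse_table_line(line)
print(entry["gpc0_rate(1000)"])  # prints 1000: allowed requests in the 1s window
```

A control loop could feed such entries straight into whatever store tracks per-proxy rates.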

Test results:
# wrk -t 40 -c 4000 -d 30s <ip>
RPS: 1003.05 (Total requests: 2031922 Good: 30192 Errors: 2001730 Time: 30.10)

Margin of error: 0.3%
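The reported figures can be sanity-checked with a little arithmetic; this just replays the numbers above, no new data:

```python
good = 30192      # requests admitted within the limit (the "Good" count)
duration = 30.10  # seconds of wrk run time

rps = good / duration
margin = (rps - 1000) / 1000   # overshoot relative to the 1000 RPS target

print(round(rps, 2))           # 1003.06 (the summary's 1003.05 is truncated)
print(round(margin * 100, 1))  # 0.3 (percent) -- the quoted margin of error
```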

Thanks,
- Krishna


On Wed, Nov 8, 2017 at 10:02 AM, Krishna Kumar (Engineering) <[email protected]> wrote:

> On Tue, Nov 7, 2017 at 11:57 PM, Lukas Tribus <[email protected]> wrote:
>
> Hi Lukas,
>
> > Yes, in 1.7 you can change server maxconn values in real time using
> > the admin socket:
> > https://cbonte.github.io/haproxy-dconv/1.7/management.html#9.3-set%20maxconn%20server
>
> Thanks, will take a look at whether we can use this. The only issue is that
> we want to be able to change RPS very often, and some backend sections
> contain up to 500 servers (and many more during sales), so doing that on
> the fly at high frequency may not scale.
>
> > You are reluctant to elaborate on the bigger picture, so I guess
> > generic advice is not what you are looking for. I just hope you are
> > not trying to build some kind of distributed rate-limiting
> > functionality with this.
>
> Sorry, not reluctance; I thought giving too much detail would put people
> off from taking a look :) So you are right, we are trying to build a
> distributed rate limiting feature, and the control plane is mostly ready
> (thanks to the HAProxy developers for making such a performant and
> configurable system). The service monitors the current http_req_rate and
> the current RPS setting via the UNIX domain socket every second, publishes
> these values to a central repository (ZooKeeper), and on demand tries to
> increase capacity by requesting it from other servers, so as to keep total
> capacity constant at the configured value (e.g. 1000 RPS). Is this
> something you would not recommend?
>
> > I don't have enough experience with stick-tables to comment on this
> > generally, but I would suggest you upgrade to a current 1.7 release
> > first of all and retry your tests. There are currently 223 bugs fixed
> > in releases AFTER 1.6.3:
> > http://www.haproxy.org/bugs/bugs-1.6.3.html
>
> Thanks, we are considering moving to this version.
>
> > Maybe someone more stick-table savvy can comment on your specific question.
>
> If anyone else has done something similar, I would really like to hear from
> you on how to control RPS in the backend.
>
> Regards,
> - Krishna
>
>
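The distributed scheme described in the quoted message (per-node rates published to a central store, with a fixed global budget) can be sketched as a proportional rebalance. This is a hypothetical illustration, not code from the thread; the node names and demand numbers are made up:

```python
def rebalance(demand, total):
    """Split a global RPS budget across nodes in proportion to observed demand.

    demand: dict of node -> requests/sec observed over the last interval
    total:  global budget, e.g. 1000 RPS
    Returns a dict of node -> new per-node limit (floats summing to total).
    """
    observed = sum(demand.values())
    if observed == 0:
        # No traffic anywhere: split the budget evenly.
        share = total / len(demand)
        return {node: share for node in demand}
    return {node: total * rate / observed for node, rate in demand.items()}

# Three nodes, one seeing most of the traffic; global budget 1000 RPS.
limits = rebalance({"node-a": 900, "node-b": 200, "node-c": 100}, 1000)
print(limits)  # node-a gets 750, the rest split the remaining 250
```

Each node would then push its new limit into its local HAProxy check; the hard part in practice is the one-second feedback delay, not the arithmetic.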
Krishna Kumar (Engineering)
Re: [Working update]: Request rate limiting on the backend section
November 08, 2017 11:30AM
To remove the reported "margin of error", the ACL needed a fix:

acl within_limit sc2_gpc0_rate() lt 1000

i.e. "le" changed to "lt", since the first request in a window is seen at
rate 0 and the 1000th at rate 999, so "le 1000" admits one request too many.
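The off-by-one is easy to reproduce with a tiny counter simulation. This illustrates the window arithmetic only; it is not HAProxy code:

```python
def allowed_in_window(n_requests, limit, strict):
    """Count how many of n_requests pass a per-window rate check.

    The counter is read *before* each request is admitted, mirroring
    gpc0_rate: the check uses 'lt' when strict, 'le' otherwise, and
    sc2_inc_gpc0 bumps the counter only on allowed requests.
    """
    count = 0
    admitted = 0
    for _ in range(n_requests):
        ok = count < limit if strict else count <= limit
        if ok:
            admitted += 1
            count += 1
    return admitted

print(allowed_in_window(2000, 1000, strict=False))  # 1001: the 'le' off-by-one
print(allowed_in_window(2000, 1000, strict=True))   # 1000: exact with 'lt'
```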

