core.sleep() ignores param and sleeps for 10 seconds in response action

Posted by Nick Dimov 
Hello, everyone.

I am encountering a problem with Lua in haproxy. I also reported it here
https://github.com/sflow/haproxy/issues/2 but the problem is like this:

When using a response action, this function sleeps for 10 seconds no matter
what parameter I pass to it. It also seems that the wait time always equals
timeout connect. The sample config is:

global
    daemon
    log /dev/log local6
    lua-load /etc/haproxy/delay.lua

defaults
    mode http
    timeout connect 10000ms

frontend fe
    bind *:80
    mode http
    default_backend b_http_hosts

backend b_http_hosts
    mode http
    http-response lua.delay_response
    server s_web1 server:80 check

and the Lua code:

function delay_response(txn)
    core.msleep(1)
end

-- register as a response action (TCP and HTTP)
core.register_action("delay_response", { "tcp-res", "http-res" },
                     delay_response)

Note that if core.msleep() is commented out, everything works as expected.

I tested version 1.6 (it hangs there for 30 seconds), 1.7 (the wait matches
timeout connect), and 1.8 (same as 1.7).

Any idea how to overcome this problem? All I need is to delay the responses
based on information from a backend header.
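
For the record, this is roughly what I am trying to end up with once the
sleep works: read a delay from a response header and wait that long. The
"X-Delay-Ms" header name and the 5-second cap are placeholders I made up
for this sketch, not something haproxy defines:

function delay_response(txn)
    -- header names are returned in lowercase by haproxy
    local hdrs = txn.http:res_get_headers()
    local delay = 0
    if hdrs["x-delay-ms"] then
        -- first value of the header, in milliseconds
        delay = tonumber(hdrs["x-delay-ms"][0]) or 0
    end
    -- cap the delay so a bogus header cannot stall responses for too long
    if delay > 5000 then
        delay = 5000
    end
    if delay > 0 then
        core.msleep(delay)
    end
end

-- registered only for http-res here, since it reads an HTTP header
core.register_action("delay_response", { "http-res" }, delay_response)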

Regards!
Hello Nick,

On Fri, Nov 10, 2017 at 04:50:37PM +0200, Nick Dimov wrote:
> Hello, everyone.
>
> I am encountering a problem with Lua in haproxy. I also reported it here
> https://github.com/sflow/haproxy/issues/2 but the problem is like this:
>
> When using a response action, this function sleeps for 10 seconds no matter
> what parameter I pass to it. It also seems that the wait time always equals
> timeout connect. The sample config is:
>
> global
> daemon
> log /dev/log local6
> lua-load /etc/haproxy/delay.lua
>
> defaults
> mode http
> timeout connect 10000ms
>
> frontend fe
> bind *:80
> mode http
> default_backend b_http_hosts
>
> backend b_http_hosts
> mode http
> http-response lua.delay_response
> server s_web1 server:80 check
>
> and the LUA code:
>
> function delay_response(txn)
> core.msleep(1)
> end
>
> core.register_action("delay_response", {"tcp-res", "http-res" },
> delay_response);
>
> Note that if core.msleep() is commented out, everything works as expected.
>
> I tested version 1.6 (it hangs there for 30 seconds), 1.7 (the wait matches
> timeout connect), and 1.8 (same as 1.7).
>
> Any idea how to overcome this problem? All I need is to delay the responses
> based on information from a backend header.

I've checked and in fact it's been like this forever, it's just that Lua
uncovered it :-) Basically the response analyse timeout was never handled
in process_stream().

I've just fixed it upstream now and verified that your example above
correctly pauses for delays smaller than the connect timeout.

You can apply the attached patch, it should work on 1.7 as well.

Thanks for reporting this!
Willy
From 9a398beac321fdda9f6cf0cb7069960d1a29cfd6 Mon Sep 17 00:00:00 2001
From: Willy Tarreau <[email protected]>
Date: Fri, 10 Nov 2017 17:14:23 +0100
Subject: BUG/MEDIUM: stream: don't ignore res.analyse_exp anymore

It happens that no single analyser has ever needed to set res.analyse_exp,
so that process_stream() didn't consider it when computing the next task
expiration date. Since Lua actions were introduced in 1.6, this can be
needed on http-response actions for example, so let's ensure it's properly
handled.

Thanks to Nick Dimov for reporting this bug. The fix needs to be
backported to 1.7 and 1.6.
---
src/stream.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/src/stream.c b/src/stream.c
index 48c4ba5..19d94f3 100644
--- a/src/stream.c
+++ b/src/stream.c
@@ -2427,6 +2427,8 @@ struct task *process_stream(struct task *t)
 
 	t->expire = tick_first(t->expire, req->analyse_exp);
 
+	t->expire = tick_first(t->expire, res->analyse_exp);
+
 	if (si_f->exp)
 		t->expire = tick_first(t->expire, si_f->exp);
 
--
1.7.12.1
Hello,

The patch works great. I tested on 1.8 and 1.7 and both are working well
now.

Thanks a lot!


On 10.11.2017 18:20, Willy Tarreau wrote:
> Hello Nick,
>
> On Fri, Nov 10, 2017 at 04:50:37PM +0200, Nick Dimov wrote:
>> [...]
> I've checked and in fact it's been like this forever, it's just that Lua
> uncovered it :-) Basically the response analyse timeout was never handled
> in process_stream().
>
> I've just fixed it upstream now and verified that your example above
> correctly pauses for delays smaller than the connect timeout.
>
> You can apply the attached patch, it should work on 1.7 as well.
>
> Thanks for reporting this!
> Willy
On Fri, Nov 10, 2017 at 06:43:42PM +0200, Nick Dimov wrote:
> Hello,
>
> The patch works great. I tested on 1.8 and 1.7 and both are working well
> now.

thanks for confirming!

Willy
Hi again,

Since I'm still on this: I've run into another possible issue with haproxy.
The documentation states:

> When a server has a "maxconn" parameter specified, it means that its number
> of concurrent connections *will never go higher*.
But in my experiments, haproxy creates more connections to the backend than
specified in maxconn.

For example, I use a backend like this:

> backend b_http_hosts
>     mode http
>
>     server s_web1  server:80 maxconn 10 check

And I'm testing with wrk -c 20 and checking the established connections
from haproxy to apache (with netstat), and that number is always equal
to what I specify in the -c parameter for wrk.

It's worth mentioning that it does respect the number of simultaneous
requests (I checked that specifically), but is this behavior normal? Is
it OK that it creates more connections than specified in maxconn?

Regards,

Nick.



On 10.11.2017 18:57, Willy Tarreau wrote:
> On Fri, Nov 10, 2017 at 06:43:42PM +0200, Nick Dimov wrote:
>> Hello,
>>
>> The patch works great. I tested on 1.8 and 1.7 and both are working well
>> now.
> thanks for confirming!
>
> Willy
On Fri, Nov 10, 2017 at 07:05:39PM +0200, Nick Dimov wrote:
> For example, I use a backend like this
>
> > backend b_http_hosts
> >     mode http
> >
> >     server s_web1  server:80 maxconn 10 check
>
> And I'm testing with wrk -c 20 and checking the established connections
> from haproxy to apache (with netstat), and that number is always equal
> to what I specify in the -c parameter for wrk.
>
> It's worth mentioning that it does respect the number of simultaneous
> requests (I checked that specifically), but is this behavior normal? Is
> it OK that it creates more connections than specified in maxconn?

Yes, it's expected as soon as you have enabled keep-alive on the backend.
Counting the number of idle connections in the limit would be counter-
productive and even prevent new requests from being served while some
idle connections remain. In the end, since most servers have accept queues
and dispatch requests to threads, it makes more sense to only count what
really matters, ie outstanding requests. However if you enable connection
sharing using "http-reuse", you'll see that the number of outstanding
requests is much closer to the number of connections.
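
For example, a minimal sketch on the backend from your mail; "safe" is only
one of the possible http-reuse policies, so treat it as an illustration
rather than a recommendation:

backend b_http_hosts
    mode http
    http-reuse safe
    server s_web1 server:80 maxconn 10 check

With connection sharing enabled, idle server connections can be picked up
by other streams instead of each client connection opening its own.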

Regards
Willy
Thank you for the explanation.


On 10.11.2017 19:28, Willy Tarreau wrote:
> On Fri, Nov 10, 2017 at 07:05:39PM +0200, Nick Dimov wrote:
>> [...]
> Yes, it's expected as soon as you have enabled keep-alive on the backend.
> Counting the number of idle connections in the limit would be counter-
> productive and even prevent new requests from being served while some
> idle connections remain. In the end, since most servers have accept queues
> and dispatch requests to threads, it makes more sense to only count what
> really matters, ie outstanding requests. However if you enable connection
> sharing using "http-reuse", you'll see that the number of outstanding
> requests is much closer to the number of connections.
>
> Regards
> Willy