Discussion:
[Pgbouncer-general] Better handling of idle in transaction sessions (?)
Gustavo R. Montesino
2016-06-06 11:32:15 UTC
Permalink
Hello,

We're having some trouble with a postgresql / pgbouncer setup due to the
huge number of idle in transaction sessions kept by the application: if we
set a smallish (proper) pool, the server connections are filled by idle
transactions and the app pretty much dies; if we set a bigger pool, we
sometimes get too many active sessions on the database and response time
goes too high.

While I think that's an application-side problem and should be fixed
there, I also believe that it would be possible for pgbouncer to allow some
alternative handling of idle transactions that would allow better tuning
and use of db server resources... To that end, I've been thinking of something
along these lines:

- Add a new pool mode (just to keep the current modes' behaviour unchanged);
- Add a new client list for idle in transaction sessions;
- Add a new server list/pool/queue to handle re-activated idle in
transaction sessions;
- Add a new config setting for the maximum number of idle in transaction
sessions allowed to run in parallel;
- When a session goes idle in transaction, take the client/server out of
the regular list/queue/pool and allow some other new/waiting client to get
a new server connection;
- When an idle in transaction session wakes up, enqueue/run it according to
the new setting / server pool.

The idea would be that pool size could be tuned to the server capacity,
without worrying about the idle transactions, and those would also get their
exclusive "pool" so that they get to run when needed; at first I thought
they could just go on the regular queue, but I guess that could generate
deadlocks if the idle transactions hold locks needed by all active ones.
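
To make the bookkeeping I have in mind a bit more concrete, here is a toy
model of the accounting (plain Python, purely illustrative; the names are
made up and it has nothing to do with pgbouncer's actual data structures):

    # Toy model of the proposed accounting (illustration only).
    # pool_size     - server connections allowed to run "regular" work
    # max_awake_iit - idle-in-transaction sessions allowed to resume in parallel
    class ToyPool:
        def __init__(self, pool_size, max_awake_iit):
            self.pool_size = pool_size
            self.max_awake_iit = max_awake_iit
            self.active = set()        # links currently executing regular work
            self.idle_in_tx = set()    # links parked while the client is idle in tx
            self.awake_iit = set()     # parked links that resumed and are executing
            self.wait_queue = []       # new clients waiting for a regular slot
            self.iit_wait_queue = []   # woken IIT links waiting for an IIT slot

        def request_slot(self, link):
            # A new transaction competes only for the regular pool_size slots;
            # parked idle-in-transaction links do not count against them.
            if len(self.active) < self.pool_size:
                self.active.add(link)
            else:
                self.wait_queue.append(link)

        def goes_idle_in_tx(self, link):
            # Park the link and hand its slot to the next waiting client.
            self.active.discard(link)
            self.idle_in_tx.add(link)
            if self.wait_queue and len(self.active) < self.pool_size:
                self.active.add(self.wait_queue.pop(0))

        def wakes_up(self, link):
            # A woken IIT link runs through its own bounded queue, so it can
            # neither starve nor be starved by the regular pool.
            self.idle_in_tx.discard(link)
            if len(self.awake_iit) < self.max_awake_iit:
                self.awake_iit.add(link)
            else:
                self.iit_wait_queue.append(link)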

As I don't really know pgbouncer (or postgresql) internals, I might be
missing something; does that sound reasonable / doable?


Thanks and regards,

Gustavo R. Montesino
Greg Sabino Mullane
2016-06-06 14:09:48 UTC
Permalink
While I think that's an application-side problem and should be fixed
there, I also believe that it would be possible for pgbouncer to allow some
alternative handling of idle transactions that would allow better tuning
and use of db server resources... To that end, I've been thinking of something
...

That seems like an awful lot of trouble to work around a broken application.

I'm not even sure that would work: if the connection is idle in transaction,
then there is always a Postgres backend associated with it, and nothing
else can use that connection other than the existing client [1], so you
would still run into the same problems you already have.

If the number of IIT is causing max_connections to fill up, some solutions
are to reduce the number of IIT by fixing the app (best), or boost max_connections.
If most of the connections are IIT, then boosting it might not have too much
of a negative impact. If there is a certain class of applications more prone
to IIT, you could create a separate pgbouncer pool for them.

At the end of the day, however, you really need to get the IITs reined in,
as they have other ill effects besides making your pgbouncer pools
fill up.


[1] Notwithstanding the recent idea proposed in another thread to allow suspending
transactions and have them picked up by a different client, but that would not
help the issue at hand.
--
Greg Sabino Mullane ***@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8
Merlin Moncure
2016-06-06 14:26:51 UTC
Permalink
Post by Greg Sabino Mullane
While I think that's an application-side problem and should be fixed
there, I also believe that it would be possible for pgbouncer to allow some
alternative handling of idle transactions that would allow better tuning
and use of db server resources... To that end, I've been thinking of something
...
That seems like an awful lot of trouble to work around a broken application.
I'm not even sure that would work: if the connection is idle in transaction,
then there is always a Postgres backend associated with it, and nothing
else can use that connection other than the existing client [1], so you
would still run into the same problems you already have.
If the number of IIT is causing max_connections to fill up, some solutions
are to reduce the number of IIT by fixing the app (best), or boost max_connections.
If most of the connections are IIT, then boosting it might not have too much
of a negative impact. If there is a certain class of applications more prone
to IIT, you could create a separate pgbouncer pool for them.
At the end of the day, however, you really need to get the IITs reined in,
as they have other ill effects besides making your pgbouncer pools
fill up.
[1] Notwithstanding the recent idea proposed in another thread to allow suspending
transactions and have them picked up by a different client, but that would not
help the issue at hand.
Currently it's a pretty good idea to run a script on cron that kills
all IIT connections that have been idle for longer than, say, an hour.
Recently there was extensive discussion on hackers about adding
settings to postgresql.conf to make dealing with these nasty things
easier. IIRC it resulted in a patch -- you may want to look. I
don't think pgbouncer is the right place to deal with this problem.
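
A minimal sketch of such a cron script (untested, in Python with psycopg2;
the connection string and the one-hour cutoff are placeholders to adjust):

    # Untested sketch: terminate backends that have been idle in transaction
    # for more than an hour.  Connection details are placeholders.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres user=postgres")
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("""
        SELECT pid, pg_terminate_backend(pid)
          FROM pg_stat_activity
         WHERE state = 'idle in transaction'
           AND state_change < now() - interval '1 hour'
    """)
    for pid, terminated in cur.fetchall():
        print("terminated backend %s: %s" % (pid, terminated))
    cur.close()
    conn.close()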

merlin
Gustavo R. Montesino
2016-06-06 14:38:55 UTC
Permalink
Hello Merlin,

On Mon, Jun 6, 2016 at 11:26 AM Merlin Moncure <***@gmail.com> wrote:

...
Post by Merlin Moncure
Currently it's a pretty good idea to run a script on cron that kills
all IIT connections that have been idle for longer than, say, an hour.
Recently there was extensive discussion on hackers about adding
settings to postgresql.conf to make dealing with these nasty things
easier. IIRC it resulted in a patch -- you may want to look. I
don't think pgbouncer is the right place to deal with this problem.
We've already set "idle_transaction_timeout" in pgbouncer; however, it's still
not enough to solve the problem, and we can't lower it any further without
affecting application functionality (or at least we didn't get any indication
from the app people that we can lower it).

Regards,

Gustavo R. Montesino
Gustavo R. Montesino
2016-06-06 14:33:30 UTC
Permalink
Hello Greg,
Post by Gustavo R. Montesino
Post by Gustavo R. Montesino
While I think that's an application-side problem and should be fixed
there, I also believe that it would be possible for pgbouncer to allow
some
Post by Gustavo R. Montesino
alternative handling of idle transactions that would allow better tuning
and use of db server resources... To that end, I've been thinking of
something
...
That seems like an awful lot of trouble to work around a broken application.
Agreed, but unfortunately fixing things in the right place is hard for
non-technical reasons (as always....)
Post by Gustavo R. Montesino
I'm not even sure that would work: if the connection is idle in transaction,
then there is always a Postgres backend associated with it, and nothing
else can use that connection other than the existing client [1], so you
would still run into the same problems you already have.
My original idea, at least, would be to leave the IIT sessions "sleeping" and
open new connections as needed to process active ones up to pool_size. It
would indeed increase the number of sessions opened on the server, as we
would have "x" IIT + pool_size sessions; however, as long as the number of
active connections is kept under control, I think our server could manage to
give the needed answers in a reasonable time. This would be accomplished by
having the queue control the number of IIT sessions that get "awakened".

If the number of IIT is causing max_connections to fill up, some solutions
Post by Gustavo R. Montesino
are to reduce the number of IIT by fixing the app (best), or boost max_connections.
If most of the connections are IIT, then boosting it might not have too much
of a negative impact. If there is a certain class of applications more prone
to IIT, you could create a separate pgbouncer pool for them.
Unfortunately the number of IIT isn't constant; a big pool gets the server
overwhelmed with active sessions at times (which is the current config, BTW).
The server is dedicated to a single application, so different pools won't
solve it.
Post by Gustavo R. Montesino
At the end of the day, however, you really need to get the IITs reined in,
as they have other ill effects besides making your pgbouncer pools
fill up.
Agreed again, it would be better, but it's a harder battle to fight.

If something along the lines of my first e-mail gets implemented, would it
have any chance of getting integrated into pgbouncer?
Post by Gustavo R. Montesino
[1] Notwithstanding the recent idea proposed in another thread to allow suspending
transactions and have them picked up by a different client, but that would not
help the issue at hand.
--
End Point Corporation
PGP Key: 0x14964AC8
Greg Sabino Mullane
2016-06-16 16:15:18 UTC
Permalink
Post by Gustavo R. Montesino
Agreed, but unfortunately fixing things in the right place is hard for
non-technical reasons (as always....)
:)
Post by Gustavo R. Montesino
My original idea, at least, would be to leave the IIT sessions "sleeping" and open new
connections as needed to process active ones up to pool_size. It would indeed
increase the number of sessions opened on the server, as we would have "x" IIT + pool_size sessions;
however, as long as the number of active connections is kept under control, I think
our server could manage to give the needed answers in a reasonable time. This would be
accomplished by having the queue control the number of IIT sessions that get "awakened".
I'm still not sure I'm following. How is this any better than simply boosting
max_connections? If max_connections is 10, let's say, and 7 applications go
IIT, there are three workers left that can do real work. It doesn't matter
too much if pgbouncer is involved or not. Those 7 tie up seven slots no matter what.
Neither Postgres nor Pgbouncer can let anyone else into those slots until the
transaction ends (pgbouncer) or the session ends (postgres). Perhaps I am
misunderstanding your idea though?
Post by Gustavo R. Montesino
If something along the lines of my first e-mail gets implemented, would it
have any chance of getting integrated into pgbouncer?
Sure. I'm not a committer, but a decent, well-documented patch should always
be accepted.
--
Greg Sabino Mullane ***@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8
Gustavo R. Montesino
2016-06-17 10:45:12 UTC
Permalink
Post by Gustavo R. Montesino
Post by Gustavo R. Montesino
Agreed, but unfortunately fixing things in the right place is hard for
non-technical reasons (as always....)
:)
Post by Gustavo R. Montesino
My original idea at least would be to let the IIT "sleeping" and open new
connections as needed to process actives up to pool_size, so it would
indeed
Post by Gustavo R. Montesino
increase the amount of sessions opened on the server as we would have "x"
IIT + pool_size sessions,
Post by Gustavo R. Montesino
however as far as the number of the active connection number is kept
under control I think
Post by Gustavo R. Montesino
our server could manage giving the needed answers in a reasonable time.
It would be
Post by Gustavo R. Montesino
accomplished by having the queue control the number of IIT sessions that get
"awakened".
I'm still not sure I'm following. How is this any better than simply boosting
max_connections? If max_connections is 10, let's say, and 7 applications go
IIT, there are three workers left that can do real work. It doesn't matter
too much if pgbouncer is involved or not. Those 7 tie up seven slots no matter what.
Neither Postgres nor Pgbouncer can let anyone else into those slots until the
transaction ends (pgbouncer) or the session ends (postgres). Perhaps I am
misunderstanding your idea though?
Let me try to take a step back and explain the problem better:

We have a DB server which, in our experience, works best with around 30
active sessions; above that, query response times go up beyond what we would
like, and above, say, 60-80 it gets bad enough to be unusable. It hosts the
db of a single application, and we use bouncer in transaction mode to try to
keep the number of simultaneously active sessions on the db side under
control (we do have more sessions than that at peak hours and do need some
queueing).

I've talked with some guys who know the application better, and they were
explaining to me that the application has some conversation concept, which
seems to mean that it's expected to keep transactions open while it waits
for user input. So it basically means we have unpredictable numbers of
idle-in-transaction sessions for unpredictable amounts of time, as it all
depends on user action.

After some monitoring, we have come to the conclusion that we can expect
peaks of, let's say, around 180 IIT, so we set our bouncer pool to 200
connections. That works like a charm when the IIT count is around what we
expect: we get around 180 IIT, 20 active sessions (plus the IIT sessions
which wake up), and the rest gets queued up.
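
For reference, the kind of numbers above can be pulled with something as
simple as this (illustrative sketch only, not our actual monitoring;
connection details are placeholders):

    # Illustrative sketch: count server-side sessions per state.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres user=postgres")  # placeholder DSN
    cur = conn.cursor()
    cur.execute("""
        SELECT state, count(*)
          FROM pg_stat_activity
         GROUP BY state
         ORDER BY count(*) DESC
    """)
    for state, n in cur.fetchall():
        print("%-30s %s" % (state, n))
    cur.close()
    conn.close()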

However, the problem comes when the IIT numbers are different. Sometimes we
get "only" 60-80 IIT, which allows more than 100 simultaneous active
sessions, and the server can't handle it well.

So the concept I tried to get at with the proposed solution would be to give
total control over the number of connections allowed to execute at the same
time; for example: I want at most 30 active connection "slots", 10 reserved
for new transactions and 20 to process old waiting transactions when they
wake up.

Hope it's clear now. Honestly, I had been expecting this to be a somewhat
common problem; I wonder if I was mistaken about that.

Regards,

Gustavo R. Montesino
Greg Sabino Mullane
2016-06-17 15:42:44 UTC
Permalink
Thank you: that does help.

...
Post by Gustavo R. Montesino
Hope it's clear now. Honestly, I had been expecting this to be a somewhat
common problem; I wonder if I was mistaken about that.
Well, idle in transaction is a common problem, but it's generally agreed
that pgbouncer is not the level at which to fix such problems. Keeping a
transaction open while waiting for client input is a fairly big application
design flaw. I know I keep hammering on that and I know it is out of your
hands, but that's a problem that has many application-level solutions, and
that's why nobody else is asking for a pgbouncer-based solution. :)

Still, it is a genuine problem for you, so I don't feel anyone will stand
in your way if you want to create a pgbouncer patch.
--
Greg Sabino Mullane ***@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8
Merlin Moncure
2016-06-17 16:09:52 UTC
Permalink
Post by Greg Sabino Mullane
Thank you: that does help.
...
Post by Gustavo R. Montesino
Hope it's clear now. Honestly, I had been expecting this to be a somewhat
common problem; I wonder if I was mistaken about that.
Well, idle in transaction is a common problem, but it's generally agreed
that pgbouncer is not the level at which to fix such problems. Keeping a
transaction open while waiting for client input is a fairly big application
design flaw. I know I keep hammering on that and I know it is out of your
hands, but that's a problem that has many application-level solutions, and
that's why nobody else is asking for a pgbouncer-based solution. :)
Still, it is a genuine problem for you, so I don't feel anyone will stand
in your way if you want to create a pgbouncer patch.
This really belongs in the server. Certainly it might be easier to
push a patch into pgbouncer, but before doing so at least review the
-hackers thread, "Request: pg_cancel_backend variant that handles
'idle in transaction' sessions".

merlin
Greg Sabino Mullane
2016-06-17 19:05:02 UTC
Permalink
Post by Merlin Moncure
This really belongs in the server. Certainly it might be easier to
push a patch into pgbouncer, but before doing so at least review the
-hackers thread, "Request: pg_cancel_backend variant that handles
'idle in transaction' sessions".
I don't know that the OP's exact problem pops up in that thread, which
is all about forcefully ending transactions, but the OP does *not*
want those transactions to get cancelled, IIUC.
--
Greg Sabino Mullane ***@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8
Gustavo R. Montesino
2016-06-18 10:40:50 UTC
Permalink
Post by Greg Sabino Mullane
Post by Merlin Moncure
This really belongs in the server. Certainly it might be easier to
push a patch into pgbouncer, but before doing so at least review the
-hackers thread, "Request: pg_cancel_backend variant that handles
'idle in transaction' sessions".
Thanks for the pointer, that's a very interesting thread. However...
Post by Greg Sabino Mullane
I don't know that the OP's exact problem pops up in that thread, which
is all about forcefully ending transactions, but the OP does *not*
want those transactions to get cancelled, IIUC.
Greg nailed it; I can't cancel these transactions, as in our (broken)
application they can be a legitimate wait for some user input. FWIW,
we're already using pgbouncer's idle_transaction_timeout to try to weed
out lost sessions, but the timeout we can set there isn't enough to keep
the number of sessions under control.


Regards,

Gustavo R. Montesino
Gustavo R. Montesino
2016-06-18 10:07:16 UTC
Permalink
Well, I think my work in progress with shared_pool can be a good basis
for what you want.
I'm not sure it's such a good idea for fixing your problem in the long
term; it looks like you really have a broken application design.
Back to shared_pool, the current feature allows a connection (client/server) to
sit on a distinct pending list so that one client can recover the connection of
someone else, *even if it's idle-in-transaction*. I think it achieves
what you want, but the app must manage that (and there is an extra cost for
pgbouncer in finding the relevant client<->server connection).
I wonder, though: while the concept sounds interesting, I think adapting the
application to use that might be as hard as fixing it not to generate
the IITs in the first place (it would require changes in the same places,
I think, plus some "reconnect" handling).

It would also make the application dependent on bouncer. While
that's a pretty light dependency, this is very unlikely to happen,
unfortunately.
Also, it might be possible to do what you want. I think you just need a
new SV_IIT state and associated list; everything else should work the same.
I've been toying around with it a bit; if I ever get enough time to make a
full working patch, I'll post it here.


Thanks and regards,

Gustavo R. Montesino
Stuart Bishop
2016-09-07 08:33:43 UTC
Permalink
Post by Gustavo R. Montesino
We had a DB server which, in our experience, worked best with around 30
active sessions, above
that the response time of the queries goes up above the times we would
like; above say 60-80 it
gets bad enough to be unusable. It hosts the db of a single application,
and we use bouncer on
transaction mode to try keeping the number of simultaneous active on db
side under control (we
do have more sessions than that on peak hours and do need some queueing).
I've talked with some guys who know the application better and they were
explaining to me
that the application has some conversation concept which seems to mean
that it's
expected that it keeps open transactions while it waits for user input. So
it basically means
we have unpredictable amounts of idle in transactions for unpredictable
amounts of
time as it all depends on user action.
After some monitoring, we have come to the conclusion that we could expect peaks
of, let's say,
around 180 IIT, so we set our bouncer pool to 200 connections. That works
like a charm
when IIT is around what we expect, we get like 180 IIT, 20 actives (plus
the IIT which
awakens) and the rest gets queued up.
However the problem comes when IIT numbers are different. Sometimes we get
"only"
60-80 IIT, that allows more than 100 simultaneous active sessions and the
server can't
handle it well.
So the concept I tried to get at with the proposed solution would be to
give total
control on the amount of connections allowed to execute at the same time,
like saying: I want at most 30 active connection "slots", 10 reserved for
new
transactions and 20 to process old waiting transactions when they wake up.
Hope it's clear now. Honestly, I had been expecting this to be a somewhat
common
problem; I wonder if I was mistaken about that.
I think I understand what you are trying to achieve. From my perspective,
it has nothing at all to do with idle in transaction connections, and that
is just confusing people. What I think you are trying to achieve is to
throttle the number of connections that may be actively running queries. If
so, I have no idea if it is possible to do that in pgbouncer or if your
current approach will work (I'm not able to competently review your patch).
I do think it would be better achieved in PostgreSQL, but the feature would
not be available until the 10.0 release at the earliest, so maybe if it
isn't too invasive a feature, it might belong in pgbouncer.
--
Stuart Bishop <***@canonical.com>
Gustavo R. Montesino
2016-08-10 10:18:19 UTC
Permalink
Hello,

So I've finally prepared a first draft patch to address this question
(attached).

Considering all the feedback and a bit more digging in the code, I've
changed my intended approach and implemented it as a configuration option,
"ignore_idle_tx", which, when set, moves IIT servers to another list
(idletx_server_list), opening more "space" for new servers.

With this setting on, the number of servers in bouncer can go way higher
than the pool size, so a side effect was the need to add checks in some
places to guarantee that only up to pool_size servers get active. After some
thinking I've opted to leave these checks active even when the option is not
set; it does change the behaviour of the bouncer a bit, but I think it will
respond better to pool size changes this way, and also if the option gets
activated or deactivated at runtime.

This also required some changes to the reserve pool, which just spawned the
servers and then kept using them until they got deallocated, on the assumption
that any existing server should be used. I've opted to check the waiting
time of every client before allocating a reserve pool connection; as this
check is made in the janitor and the server gets activated to get it past the
check in find_server(), I've also added a flag to the (client) socket to
indicate that this specific client should use a reserve connection.

The only point I'm not really satisfied with in this patch is takeover, as
sv_idle servers go to sv_active on the new bouncer. I haven't thought of any
way to change that without changing "SHOW FDS", and changing that seemed bad.

This is pretty much my first time with pgbouncer's code, so it's very
likely I missed something... please let me know and I'll work on the needed
changes/fixes when possible.


Thanks and regards,

Gustavo R. Montesino
Gustavo R. Montesino
2016-09-06 09:50:21 UTC
Permalink
Hello,
Post by Gustavo R. Montesino
Hello,
So I've finally prepared a first draft patch to address this question
(attached).
Has anyone managed to take a look at this? Any feedback, good or bad, would
be appreciated.


Regards,

Gustavo R. Montesino.
Gustavo R. Montesino
2016-09-13 09:42:48 UTC
Permalink
Hello,
Hi Gustavo,
Post by Gustavo R. Montesino
So I've finally prepared a first draft patch to address this
question (attached).
Has anyone managed to take a look at this? Any feedback, good or bad,
would be appreciated.
I just read it quickly; it looks good (I didn't test).
Thanks for looking at this.
I'm interested in the performance impact (if any) for the usual use case of
pgbouncer (massive OLTP in transaction mode, for example).
Can you provide some pgbench numbers with/without the patch and with/without
the idletx option set?
I've made a few test runs. It's not exhaustive testing, but I guess it can be
used for drawing some conclusions... These tests were made on oldish
desktop-class hardware, postgresql 9.5 with mostly stock settings (no
optimizations). pgbench was initialized with scale 64.
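
For anyone wanting to reproduce, the runs can be driven with something along
these lines (sketch only; host, port, and database name are placeholders, with
pgbouncer assumed to listen on 6432):

    # Sketch of how such runs can be driven; -T was 180 for the direct
    # baseline and 600 for the bouncer runs.
    import subprocess

    subprocess.check_call(["pgbench", "-i", "-s", "64", "bench"])  # init, scale 64
    for clients in (24, 48):
        subprocess.check_call(["pgbench", "-c", str(clients), "-T", "600",
                               "-h", "127.0.0.1", "-p", "6432", "bench"])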

First, a quick baseline with direct connections (no bouncer), 3 minutes per
run (-T 180):

12 clients: 147.641126 TPS
24 clients: 212.541653 TPS
36 clients: 153.125731 TPS
48 clients: 156.216199 TPS

Unpatched bouncer with pool size 24, 10 minutes per run (-T 600):

24 clients: 176.130626 TPS
48 clients: 182.963496 TPS

Patched bouncer with ignore_idletx=1, pool size 24, 10 minutes per run (-T
600):

24 clients: 161.566736 TPS
48 clients: 168.525678 TPS

Patched bouncer with ignore_idletx=0, pool size 24, 10 minutes per run (-T
600):

24 clients: 92.155826 TPS
48 clients: 125.362078 TPS


If I haven't messed up the testing, I think we can reach some conclusions: it
seems the current patch has too much overhead, as shown by idletx=0 vs
unpatched. On the other hand, the results with idletx=1 seem to indicate
that maybe this could even give a performance boost with well-designed
transactions if we can somehow reduce this overhead, considering how close
the values were to the unpatched bouncer, which isn't something I
expected (my aim was more at surviving badly designed transactions).

I'll take another look to see if I can reduce all this overhead somehow.
Maybe by reducing the amount of list counting...


Regards,

Gustavo R. Montesino
Gustavo R. Montesino
2016-09-28 10:14:11 UTC
Permalink
Hello again,
Post by Gustavo R. Montesino
Hello,
I'm interested by performances impact (if any) for usual use case of
pgbouncer (massive OLTP in transaction mode for example).
Can you provide some pgbench numbers with/without patch and with/without
idletx option set ?
I've made a few test runs. It's not exhaustive testing, but I guess it can
be used for drawing some conclusions... These tests were made on oldish
desktop-class hardware, postgresql 9.5 with mostly stock settings (no
optimizations). pgbench was initialized with scale 64.
First, a quick baseline with direct connections (no bouncer), 3 minutes
12 clients: 147.641126 TPS
24 clients: 212.541653 TPS
36 clients: 153.125731 TPS
48 clients: 156.216199 TPS
24 clients: 176.130626 TPS
48 clients: 182.963496 TPS
Patched bouncer with ignore_idletx=1, 24 pool size, 10 minutes for run (-T
24 clients: 161.566736 TPS
48 clients: 168.525678 TPS
Patched bouncer with ignore_idletx=0, 24 pool size, 10 minutes for run (-T
600)
24 clients: 92.155826 TPS
48 clients: 125.362078 TPS
It seems the current patch has too much overhead, as shown by idletx=0 vs
unpatched. On the other hand, the results with idletx=1 seem to indicate
that maybe this could even give a performance boost with well-designed
transactions if we can somehow reduce this overhead, considering how close
the values were to the unpatched bouncer, which isn't something I
expected (my aim was more at surviving badly designed transactions).
I'll take another look to see if I can reduce all this overhead somehow.
Maybe reducing the amount of list countings...
I've prepared a new version of the patch, removing the statlist_count
in the janitor the same way it's already done for sv_idle and sv_used (attached).

I've also run a few more tests and noticed the results are unstable; it
would be great if someone with a better testing platform could also run a
few tests. Anyway, below are some results, all from pgbench test runs of 15
minutes. The first number is the average and the following ones are the
individual test runs:

Direct, 24 clients: 108.136176 (141.834432, 73.674168, 143.475734,
73.560371)
Direct, 48 clients: 178.174707
(195.526421, 118.463149, 199.354630, 199.354630)

Unpatched, 24 clients: 134.473196
(172.947613, 96.265347, 172.485530, 96.194294)
Unpatched, 48 clients: 158.586667
(183.491312, 130.868798, 184.081353, 131.905207)

idletx 0, 24 clients: 116.956359 (146.635479, 82.920101, 156.202773,
82.066812)
idletx 0, 48 clients: 146.800299
(180.963383, 112.354390, 181.660727, 112.222697)

idletx 1, 24 clients: 127.972326
(165.523840, 90.471877, 166.164397, 89.729198)
idletx 1, 48 clients: 142.601821
(176.299587, 121.563177, 147.517611, 125.026908)

The only additional change I can see for now to optimize the code would be
to put some "ifs" around the new logic to use something more like the current
logic when ignore_idletx=0; I can make this change if you think it would be
better that way.


Regards,

Gustavo R. Montesino
Gustavo R. Montesino
2016-09-28 10:16:27 UTC
Permalink
Post by Gustavo R. Montesino
I've prepared a new version of the patch, removing the
statlist_count in the janitor the same way it's already done for sv_idle and
sv_used (attached).
Which I obviously forgot to attach....
Post by Gustavo R. Montesino
I've also run a few more tests and noticed the results are unstable; it
would be great if someone with a better testing platform could also run a
few tests. Anyway, below are some results, all from pgbench test runs of 15
minutes. The first number is the average and the following ones are the
individual test runs:
Direct, 24 clients: 108.136176 (141.834432, 73.674168, 143.475734,
73.560371)
Direct, 48 clients: 178.174707
(195.526421, 118.463149, 199.354630, 199.354630)
Unpatched, 24 clients: 134.473196
(172.947613, 96.265347, 172.485530, 96.194294)
Unpatched, 48 clients: 158.586667
(183.491312, 130.868798, 184.081353, 131.905207)
idletx 0, 24 clients: 116.956359 (146.635479, 82.920101, 156.202773,
82.066812)
idletx 0, 48 clients: 146.800299
(180.963383, 112.354390, 181.660727, 112.222697)
idletx 1, 24 clients: 127.972326
(165.523840, 90.471877, 166.164397, 89.729198)
idletx 1, 48 clients: 142.601821
(176.299587, 121.563177, 147.517611, 125.026908)
The only additional change I can see for now to optimize the code would be
to put some "ifs" around new logic to use something more like the current
logic when ignore_idletx=0; I can make this change if you think it would be
better that way.
Regards,
Gustavo R. Montesino