This was brought up by @s3tqr2w in #2371 (comment) .
Apparently we currently fork one subprocess for each table in rethinkdb export, even if the number of clients is set to a lower number.
Thanks, guys. If you need any help, I can assist. I think the simplest approach would be a -safe option, or some such, that tells it not to fork off a process for each table. A more complex approach would be to manage and throttle the subprocesses somehow.
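The "manage and throttle" idea above could be sketched with a bounded worker pool, capping concurrency at the client count instead of forking one process per table. This is a minimal illustration, not the actual rethinkdb export implementation; `export_table` and the table names are hypothetical placeholders.

```python
import multiprocessing

def export_table(table_name):
    # Placeholder for the real per-table export work.
    return "exported %s" % table_name

def export_all(tables, clients=4):
    # Cap concurrency at `clients` worker processes instead of
    # forking one subprocess per table.
    with multiprocessing.Pool(processes=clients) as pool:
        return pool.map(export_table, tables)

if __name__ == "__main__":
    print(export_all(["users", "posts", "comments"], clients=2))
```

With this shape, a database with hundreds of tables still only ever has `clients` export processes alive at once.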
Note: for context, when backups run on our DB they consume almost 2 GB of memory. Our DB is 346 MB in size.
As measured in my Ubuntu 15.04 64-bit VM, with my change memory usage is around 15 + (17+27)*clients megabytes across 1 + 2*clients processes, regardless of the number of tables. Before, it was around 15 + 17*table_count + 27*clients megabytes of RAM across 1 + table_count + clients processes.
In my specific test case, that meant going from several gigabytes of RAM down to about a hundred megabytes.
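The measured formulas above can be turned into a quick back-of-the-envelope check. The table and client counts below are hypothetical, chosen only to show how a large table count dominates the old formula while dropping out of the new one.

```python
def mem_before(table_count, clients):
    # Old behavior: 1 coordinator + 1 process per table + 1 per client
    # (megabytes, per the measurements quoted above).
    return 15 + 17 * table_count + 27 * clients

def mem_after(table_count, clients):
    # New behavior: 1 coordinator + 2 processes per client,
    # independent of the number of tables.
    return 15 + (17 + 27) * clients

# Hypothetical workload: 200 tables, 4 clients.
print(mem_before(200, 4))  # 3523 MB -- several gigabytes
print(mem_after(200, 4))   # 191 MB  -- about a hundred megabytes
```

The table count no longer appears in the second formula, which is why memory usage stays flat as the database grows.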