Discussion:
[erlang-questions] Memory Leak
yikang zhuo
2015-10-13 03:19:10 UTC
After I run a GC manually:
X = [erlang:garbage_collect(P)
     || P <- erlang:processes(),
        {status, waiting} == erlang:process_info(P, status)].
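
That comprehension only collects processes that are currently in the waiting
status; a minimal variant that forces a collection on every process would be:

[erlang:garbage_collect(P) || P <- erlang:processes()].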

top res -> 8.3g
erlang:memory() -> 3.6G


[{total,3694253080},
{processes,3301211768},
{processes_used,3300343808},
{system,393041312},
{atom,744345},
{atom_used,715567},
{binary,254047488},
{code,18840350},
{ets,61244696}]
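
erlang:memory/0 only reports what the VM believes it is using; to compare that
against what the memory allocators actually hold from the OS, something like
this should show the gap (assuming the recon library is on the node):

erlang:memory(total).
recon_alloc:memory(allocated).   %% carriers obtained from the OS
recon_alloc:memory(usage).       %% fraction of the allocated memory in use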

wow.. is there a memory leak in ejabberd, or in Erlang 17.5 (erts-6.4)?
Vance Shipley
2015-10-13 03:43:29 UTC
Post by yikang zhuo
top res -> 8.3g
erlang:memory() -> 3.6G
...
Post by yikang zhuo
wow.. is there a memory leak in ejabberd, or in Erlang 17.5 (erts-6.4)?
Not in the buggy way you may suspect:
https://blog.heroku.com/archives/2013/11/7/logplex-down-the-rabbit-hole

Jump down to the section titled "I Just Keep on Bleeding and I Won't Die".
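
A quick way to check for the symptom described there (long-lived processes
pinning refc binaries) is along these lines, assuming the recon library is
available; bin_leak/1 garbage collects every process and reports the N
processes that dropped the most binary references:

recon:bin_leak(10).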
--
-Vance
Michael Truog
2015-10-13 05:18:27 UTC
Post by Vance Shipley
Post by yikang zhuo
top res -> 8.3g
erlang:memory() -> 3.6G
...
Post by yikang zhuo
wow.. is there a memory leak in ejabberd, or in Erlang 17.5 (erts-6.4)?
https://blog.heroku.com/archives/2013/11/7/logplex-down-the-rabbit-hole
Jump down to the section titled "I Just Keep on Bleeding and I Won't Die".
There was a GC bug in 17.5, mentioned at http://erlang.org/pipermail/erlang-questions/2015-June/085032.html as OTP-12821, though you may not be running into that. I have seen abnormal GC memory growth with the 17.5 release (from the download page) but not with the 18.0 release, and I am not sure exactly which changes it was related to, only that it caused the Erlang VM to be killed by the Linux OS. If you are interested in the testing, it is at https://groups.google.com/forum/#!topic/cloudi-questions/bw2D7YOFtKU .
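
If it helps, you can confirm from the shell exactly which runtime the node is on:

io:format("OTP ~s, erts ~s~n",
          [erlang:system_info(otp_release), erlang:system_info(version)]).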
yikang zhuo
2015-10-13 08:21:46 UTC
"I Just Keep on Bleeding and I Won't Die " very humorous...

I followed your post to see what is happening in my Erlang node, and allocated
binary memory is also my biggest problem: it is almost 5G, and the usage rate
is very low.

rp(recon_alloc:fragmentation(current)).

{mbcs_carriers_size,1438941360},
{mbcs_carriers_size,908361904},
{mbcs_carriers_size,928284848},
{mbcs_carriers_size,817135792},
{mbcs_carriers_size,556302512},
{mbcs_carriers_size,479494320},
{mbcs_carriers_size,399278256},
{mbcs_carriers_size,271876272},
{mbcs_carriers_size,32944},

sum -> 5 799 708 208

{mbcs_usage,0.09427699263575272},
{mbcs_usage,0.010110173004349157},
{mbcs_usage,0.05721490350125805},
{mbcs_usage,0.03553723173589733},
{mbcs_usage,0.0037855806051078915},
{mbcs_usage,0.002756687503618395},
{mbcs_usage,0.0015088825673492223},
{mbcs_usage,0.0015768937717374615},
{mbcs_usage,0.003399708596406022},
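
The two columns above can be pulled out of the full result (shown further down)
with roughly this; recon_alloc:fragmentation(current) returns
{{Allocator, Instance}, Proplist} pairs:

[{Inst, proplists:get_value(mbcs_usage, Props),
        proplists:get_value(mbcs_carriers_size, Props)}
 || {{binary_alloc, _} = Inst, Props} <- recon_alloc:fragmentation(current)].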


But it looks more like a bug than allocator fragmentation. The default
allocator strategy is {as,aoffcbf}; can aoffcbf really end up with usage this
low? My Erlang is erts-6.4 / R17.5.

This post describes a memory leak in R17.3:
http://erlang.2086793.n4.nabble.com/Possibly-memory-leak-in-R17-td4690007.html
https://github.com/erlang/otp/blob/maint/erts/emulator/internal_doc/CarrierMigration.md#searching-the-pool

but I use R17.5.
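
If this really is fragmentation rather than a bug, I guess the knobs to
experiment with are the binary_alloc strategy and the abandon-carrier limit on
the erl command line, e.g. (just an experiment, based on my reading of the
erts_alloc and CarrierMigration docs):

erl +MBas aobf +MBacul de ...

+MBas picks the allocation strategy for binary_alloc, and +MBacul de turns on
a default abandon-carrier utilization limit so that mostly empty carriers can
be handed back to the pool.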



rp(erlang:system_info({allocator,binary_alloc})).
[{instance,0,
[{versions,"0.9","3.0"},
{options,[{e,true},
{t,true},
{ramv,false},
{sbct,524288},
{asbcst,4145152},
{rsbcst,20},
{rsbcmt,80},
{rmbcmt,50},
{mmbcs,32768},
{mmmbc,18446744073709551615},
{mmsbc,256},
{lmbcs,5242880},
{smbcs,262144},
{mbcgs,10},
{acul,0},
{as,aoffcbf}]},
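
acul is 0 here, which, if I read CarrierMigration.md correctly, means
binary_alloc never abandons carriers into the pool on this release. A quick
check across all instances (same system_info data as above):

[{N, proplists:get_value(acul, proplists:get_value(options, Info))}
 || {instance, N, Info} <- erlang:system_info({allocator, binary_alloc})].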

-->> all binary_alloc instances:

[{{binary_alloc,1},
[{sbcs_usage,1.0},
{mbcs_usage,0.09427699263575272},
{sbcs_block_size,0},
{sbcs_carriers_size,0},
{mbcs_block_size,135659064},
{mbcs_carriers_size,1438941360}]},
{{binary_alloc,4},
[{sbcs_usage,1.0},
{mbcs_usage,0.010110173004349157},
{sbcs_block_size,0},
{sbcs_carriers_size,0},
{mbcs_block_size,9183696},
{mbcs_carriers_size,908361904}]},
{{binary_alloc,2},
[{sbcs_usage,1.0},
{mbcs_usage,0.05721490350125805},
{sbcs_block_size,0},
{sbcs_carriers_size,0},
{mbcs_block_size,53111728},
{mbcs_carriers_size,928284848}]},
{{binary_alloc,3},
[{sbcs_usage,1.0},
{mbcs_usage,0.03553723173589733},
{sbcs_block_size,0},
{sbcs_carriers_size,0},
{mbcs_block_size,29038744},
{mbcs_carriers_size,817135792}]},
{{binary_alloc,5},
[{sbcs_usage,1.0},
{mbcs_usage,0.0037855806051078915},
{sbcs_block_size,0},
{sbcs_carriers_size,0},
{mbcs_block_size,2105928},
{mbcs_carriers_size,556302512}]},
{{binary_alloc,6},
[{sbcs_usage,1.0},
{mbcs_usage,0.002756687503618395},
{sbcs_block_size,0},
{sbcs_carriers_size,0},
{mbcs_block_size,1321816},
{mbcs_carriers_size,479494320}]},
{{binary_alloc,7},
[{sbcs_usage,1.0},
{mbcs_usage,0.0015088825673492223},
{sbcs_block_size,0},
{sbcs_carriers_size,0},
{mbcs_block_size,602464},
{mbcs_carriers_size,399278256}]},
{{binary_alloc,8},
[{sbcs_usage,1.0},
{mbcs_usage,0.0015768937717374615},
{sbcs_block_size,0},
{sbcs_carriers_size,0},
{mbcs_block_size,428720},
{mbcs_carriers_size,271876272}]},

Post by Vance Shipley
Post by yikang zhuo
top res -> 8.3g
erlang:memory() -> 3.6G
...
Post by yikang zhuo
wow.. is there a memory leak in ejabberd, or in Erlang 17.5 (erts-6.4)?
https://blog.heroku.com/archives/2013/11/7/logplex-down-the-rabbit-hole
Jump down to the section titled "I Just Keep on Bleeding and I Won't Die".
--
-Vance