While analyzing a WCF client application (which I did not write and still do not know much about) that talks to a number of services via SOAP and, after running for a couple of days, throws an OutOfMemoryException, I found out that .NET's PooledBufferManager never releases unused buffers, even when the app is running out of memory, which is what leads to the OOMEs.
This is, of course, in accordance with the spec: http://msdn.microsoft.com/en-us/library/ms405814.aspx
The pool and its buffers are [...] destroyed when the buffer pool is
reclaimed by garbage collection.
Please feel free to answer just a single one of the questions below; I have several, some of a more general nature and some specific to our app's use of the BufferManager.
First a couple of general questions about the (default Pooled)BufferManager:
1) In an environment where we have a GC, why would we need a BufferManager that holds on to unused memory even when that leads to an OOME? I know there is BufferManager.Clear(), which you can use to manually get rid of all buffers - if you have access to the BufferManager, that is. See further down for why I don't seem to have that access.
2) Despite MS's claim that "This process is much faster than creating and destroying a buffer every time you need to use one.", shouldn't they leave that to the GC (and its LOH, for example) and optimize the GC instead?
3) When doing a BufferManager.TakeBuffer(33 * 1024 * 1024), I get back a 64M buffer, because the PooledBufferManager caches that buffer for later reuse on the off chance that, say, 34M, 50M or 64M might be needed again - which in my case they aren't, so it's a pure waste of memory (a minimal repro sketch follows below). So was it wise to create a potentially very wasteful BufferManager like this, which is used (by default, I assume) by HttpsChannelFactory? I fail to see why the speed of memory allocation should matter here, especially when we are talking about WCF and network services that the application talks to at most once every 10 seconds, and usually only every few tens of seconds or even minutes.
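For reference, a minimal sketch of the behaviour described in 3). The limits passed to CreateBufferManager are arbitrary values I picked just so no quota gets in the way; I assume HttpsChannelFactory derives its real limits from the binding's MaxBufferPoolSize / MaxReceivedMessageSize:

using System;
using System.ServiceModel.Channels;

class BufferManagerRoundingSketch
{
    static void Main()
    {
        // maxBufferPoolSize, maxBufferSize - arbitrary, generous limits
        BufferManager bm = BufferManager.CreateBufferManager(512 * 1024 * 1024, 128 * 1024 * 1024);

        // Ask for 33M: the pooled implementation hands back the next
        // power-of-two-sized buffer, i.e. a 64M byte[].
        byte[] buffer = bm.TakeBuffer(33 * 1024 * 1024);
        Console.WriteLine("{0}M", buffer.Length / (1024 * 1024));  // prints 64M here

        // Returning the buffer does not free it; it goes back into the pool
        // and stays rooted there (exactly what the heap dump below shows) ...
        bm.ReturnBuffer(buffer);

        // ... until Clear() is called or the BufferManager itself is collected.
        bm.Clear();
    }
}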
Now some more specific questions related to our application's use of BufferManagers. The app connects to a couple of different WCF services. For each of them we maintain a pool of HTTP connections, as calls to a service may occur concurrently.
The single biggest object in one heap dump was a 64M byte array that had been used only once in our app, at initialization time, and is not needed afterwards: the response from the service is only that big at initialization (which, by the way, is typical of many applications I have worked with, even though it could be subject to optimization, e.g. caching to disk). A GC root analysis in WinDbg yields the following (I sanitized the names of our proprietary classes to 'MyServiceX', etc.):
0:000:x86> !gcroot -nostacks 193e1000
DOMAIN(00B8CCD0):HANDLE(Pinned):4d1330:Root:0e5b9c50(System.Object[])->
035064f0(MyServiceManager)->
0382191c(MyHttpConnectionPool`1[[MyServiceX, MyLib]])->
03821988(System.Collections.Generic.Queue`1[[MyServiceX, MyLib]])->
038219a8(System.Object[])->
039c05b4(System.Runtime.Remoting.Proxies.__TransparentProxy)->
039c0578(System.ServiceModel.Channels.ServiceChannelProxy)->
039c0494(System.ServiceModel.Channels.ServiceChannel)->
039bee30(System.ServiceModel.Channels.ServiceChannelFactory+ServiceChannelFactoryOverRequest)->
039beea4(System.ServiceModel.Channels.HttpsChannelFactory)->
039bf2c0(System.ServiceModel.Channels.BufferManager+PooledBufferManager)->
039c02f4(System.Object[])->
039bff24(System.ServiceModel.Channels.BufferManager+PooledBufferManager+BufferPool)->
039bff44(System.ServiceModel.SynchronizedPool`1[[System.Byte[], mscorlib]])->
039bffa0(System.ServiceModel.SynchronizedPool`1+GlobalPool[[System.Byte[], mscorlib]])->
039bffb0(System.Collections.Generic.Stack`1[[System.Byte[], mscorlib]])->
12bda2bc(System.Byte[][])->
193e1000(System.Byte[])
Looking at the GC roots of other byte arrays managed by a BufferManager reveals that the other services (not 'MyServiceX') have their own, separate BufferPool instances, so each one wastes its own memory - they are not even sharing the waste.
4) Are we doing something wrong here? I'm not a WCF expert by any means. Could we make the various HttpsChannelFactory instances all share the same BufferManager?
5) Or, maybe even better, could we tell all HttpsChannelFactory instances NOT to use BufferManagers at all and let the GC do its job - which is, after all, managing memory? (A sketch of what I have in mind follows after question 6.)
6) If questions 4) and 5) can't be answered: could I at least get access to the BufferManager of all HttpsChannelFactory instances and manually call .Clear() on them? That is far from an optimal solution, but it would already help: in my case it would free not only the aforementioned 64M, but 64M + 32M + 16M + 8M + 4M + 2M = 126M in a single service instance. That alone would make the app last much longer before running into memory problems. (And no, we don't have a memory leak issue other than the BufferManager behaviour; we do consume a lot of memory and accumulate a lot of data over the course of many days, but that's not the issue here.)
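To make 5) more concrete, here is a sketch of what I have in mind. It rests on my assumption - which may well be wrong, hence the question - that the binding's MaxBufferPoolSize is what ends up sizing the HttpsChannelFactory's buffer pool and that setting it to 0 effectively disables pooling; the contract, class name and address are placeholders for our real ones:

using System.ServiceModel;

[ServiceContract]
public interface IMyServiceX  // placeholder for the real generated service contract
{
    [OperationContract]
    string Ping();
}

class NoPoolingSketch
{
    static void Main()
    {
        // HTTPS transport, matching the HttpsChannelFactory in the dump above.
        BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);

        // My assumption: a pool size of 0 means "no pooled buffers", so returned
        // buffers become ordinary garbage for the GC instead of staying rooted.
        binding.MaxBufferPoolSize = 0;

        // Still allow the one big response we get at initialization time.
        binding.MaxReceivedMessageSize = 64 * 1024 * 1024;
        binding.MaxBufferSize = 64 * 1024 * 1024;

        ChannelFactory<IMyServiceX> factory = new ChannelFactory<IMyServiceX>(
            binding, new EndpointAddress("https://example.invalid/MyServiceX"));  // placeholder address

        IMyServiceX client = factory.CreateChannel();
        // ... use the client as before ...
        ((IClientChannel)client).Close();
        factory.Close();
    }
}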