High Memory Usage

Topics: Installing Orchard, Troubleshooting
Nov 10, 2011 at 2:53 PM


We have started working with Orchard, and we have some very small websites that still take up a lot of memory. We had to change the App Pool settings so it doesn't time out and recycle; otherwise the server was constantly busy compiling and loading the websites.

I have a site with a single image and a single page; its app pool takes 60 MB of RAM, and that is the smallest RAM usage we see. Another one uses 185 MB.

I was not able to host the sites in a shared hosting environment because of the memory and CPU usage.

Please help.  What can we do?



Nov 10, 2011 at 4:36 PM

Are you using Orchard 1.3?

Nov 10, 2011 at 4:36 PM

Also, can you enumerate all the modules you have installed?

Nov 11, 2011 at 7:36 PM

I have a live instance consuming nearly 700 MB once all three tenants have started. This is Orchard 1.3 with admittedly quite a lot of modules installed:

- Science Project - 9 modules, although only 3 of them have any features enabled

- Media Garden - also 9 modules, again not all enabled 

- CKEditor, Contrib.Cache, Contrib.DBCache, Contrib.Stars, Contrib.Voting, Iroo.VersionManager, Vandelay.Industries

- A further 7 custom modules specific to this site

- 3 custom themes (two of them are very thin inherited themes for Facebook and mobile versions)

- And of course all Orchard core modules

Now I realise this is a large number of modules, but 700 MB seems like far too much. We've just had the server upgraded to 4 GB so it's coping, but is there any kind of profiling I can run to find out whether any specific modules are doing anything silly here?

Nov 11, 2011 at 7:44 PM

Yes, it seems like you need to profile this. We've been using JetBrains' profiler, which, while not free, works really well.

Nov 11, 2011 at 8:28 PM

Yes, the version reported in Orchard is:  Orchard v.

Website: www.vbaresults.com

Memory usage = 147.3 MB
This site has almost no traffic or content:

Modules: Blogs, Containers, ContentTypes, Lists, Pages, Publish Later, Vandelay Favicon, Vandelay Meta, Remote Blog Publishing, xmlRpc, Keep Alive, Task Lease, Warmup, Media Picker, TinyMCE, Localization, Media, Email Messaging, Messaging, Navigation, Vandelay Cloud Tag, Gallery, Packaging, Packaging Commands, Lightweight scripting, Scripting, Social, Vandelay.FeedBurner, Feeds, Vandelay Remote RSS, Page Layer Hinting, Widgets. 

Plus all the core modules.

Any ideas would be appreciated.


Nov 11, 2011 at 8:46 PM

There is no way to tell from just that information. It needs to be profiled.

Nov 11, 2011 at 9:06 PM

Okay, so do I need to buy JetBrains dotTrace Profiler to do that?

Nov 11, 2011 at 9:12 PM

No, there are other profilers on the market; it's just that this one is really good. A cheaper but more tedious approach is to disable all modules and bring them back one by one, observing the memory footprint at each step.
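
If you have the Orchard command line available (bin\Orchard.exe in the web root), you can toggle features for that kind of elimination test without clicking through the dashboard each time. Roughly, at the orchard> prompt (the feature name below is only a placeholder):

    orchard> feature list
    orchard> feature disable Some.Feature     (placeholder; use a name reported by "feature list")
    orchard> feature enable Some.Feature

Then watch the app pool's working set in Task Manager between changes.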

Nov 11, 2011 at 10:44 PM

Interestingly, after running for a while it drops down to around 400 MB of RAM. Unfortunately I can't fork out £200 *just* for JetBrains' memory profiler, at least not in the near future. So I'm going to get Glimpse running and see if that tells me anything useful, and then try switching some suspect modules off to see what happens. I did try Microsoft's CLRProfiler, which I'd heard was good enough for memory profiling; unfortunately it crashed the first time I ran it, and the second time it locked up after shutting down the IIS service, rendering my server completely offline :S Locally, Orchard doesn't use anything like that amount of memory, so I have to do the profiling on my production server!

Nov 11, 2011 at 10:48 PM

JetBrains offers a 10-day trial, FWIW.

Nov 11, 2011 at 10:57 PM

Be aware that .NET will take as much memory as is available, and only runs the GC when there is some memory pressure.
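
For what it's worth, IIS-hosted ASP.NET normally uses the server flavour of the GC on multi-core machines, and that mode is controlled by Aspnet.config next to the framework, not by web.config. If you ever need to force it one way or the other, it looks roughly like this (a sketch; the path shown assumes .NET 4 on x64):

    <!-- %WINDIR%\Microsoft.NET\Framework64\v4.0.30319\Aspnet.config -->
    <configuration>
      <runtime>
        <!-- true = server GC, false = workstation GC -->
        <gcServer enabled="true" />
      </runtime>
    </configuration>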

Nov 11, 2011 at 11:03 PM

And the GC also behaves differently on server SKUs, doesn't it?

Nov 11, 2011 at 11:08 PM
Edited Nov 11, 2011 at 11:09 PM
sebastienros wrote:

Be aware that .NET will take as much memory as is available, and only runs the GC when there is some memory pressure.

I had 600 MB allocated to the app pool, and it was recycling constantly not long after it fired up, because the memory limit was being hit (rather than garbage collecting to stay under it).

I upped the limit to 1.5 GB. The process stabilises not much above 600 MB (until I hit a second or third tenant). So it's not simply taking as much as is available; it clearly *needs* 600 MB, and it's not using the full 1.5 GB available.

After a while, something is being garbage collected and it drops to 400 MB, even though there was no memory pressure.

So, for some reason my server is running differently to how you think .NET will run?
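
(For reference, the 600 MB and 1.5 GB figures above are the app pool's private memory recycling limit. Setting that from the command line would be something like the following, where the pool name is a placeholder and the value is in KB.)

    rem value is in kilobytes; 1.5 GB = 1572864 KB, pool name is a placeholder
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyOrchardPool" /recycling.periodicRestart.privateMemory:1572864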

Nov 11, 2011 at 11:17 PM

"After a while, something is being garbage collected and it drops to 400mb, even though there was no memory pressure."

Exactly what I described, but the GC is not deterministic, by design.

Nov 11, 2011 at 11:21 PM

I installed the 10-day trial of JetBrains dotTrace Memory... not hugely impressed so far: when I selected "open web page in browser" it exited with a "file not found" error. When I tried without that option, it just sat there saying "connecting" and never actually did anything...

Nov 11, 2011 at 11:44 PM

I used this one during my work on multi-tenancy optimizations, and it worked great. http://memprofiler.com/


Nov 12, 2011 at 12:19 AM

Gave it a quick whirl, looks great ... will poke around the results some more tomorrow.

Nov 12, 2011 at 4:05 PM

Well, it works with the "attach to process" option, but data collection is very limited in that mode.

If I try full ASP.NET profiling, everything locks up again. Clearly something in my server/IIS configuration is making all these profilers fail. The problem is that I can't keep doing this; it's a production server, and regularly stopping, starting, or locking up the whole of IIS will not keep my clients happy!

The "attach to process" profile only shows 65mb of live data. So why is a single process taking up over 600mb, and not running GC even when memory is full? Or is it just because this profile is not able to be aware of all the data?

Nov 12, 2011 at 4:09 PM
Edited Nov 12, 2011 at 4:10 PM

Let me just clarify something: before the RAM upgrade, the server was maxing out its 2 GB of RAM (typically sitting around 1.95 GB). If anything was able to garbage collect, it clearly wasn't happening. All that's running on this server are a few small websites and two Orchard instances, and those instances are responsible for most of the memory usage. If it's true, as Sebastien says, that garbage collection tends to run when memory is under pressure, then why wasn't it running when 2 GB was nearly full? The server was running terribly; since upgrading to 4 GB it runs very smoothly, and typical RAM usage is between 2 and 3 GB.

Nov 12, 2011 at 10:54 PM

Maybe the question is: what are the memory and server requirements to run an Orchard site?

My server has really slowed down since I installed about 6-8 very small Orchard sites, because most of my 2 GB of RAM is now not available to SQL Server or the OS.

It's a little disappointing, especially when you compare it to how much memory a PHP website uses, or even a .NET 2.0 site.

At this rate I will need a big power boost for my server before I can load another small site on it.

Nov 15, 2011 at 1:22 AM

It depends on the modules you have installed. You might want to consider multi-tenancy to host many small sites on a single instance of Orchard.

Nov 15, 2011 at 7:29 AM

We have removed all the unused modules. We have also changed the web.config file, including setting the trust level to Full and debug mode to false (rough sketch at the end of this post), and changed the App Pool recycling to use 10 MB for virtual RAM and 10 MB for physical RAM. This looks to have resolved the issue. I will have to check with our team after a day or two. We are also looking into the multi-tenancy option.

For now the sites we have changed still use more memory than we have allocated, but they drop back down again to a low memory amount of about 5 MB. That is down from 120 MB plus to 5.5 MB. Not bad. We can live with that.
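
Roughly, the web.config side of what we changed looks like this (from memory, so treat it as a sketch; the recycling limits themselves are set on the app pool in IIS, not in web.config):

    <!-- sketch of the relevant bits; exact attributes may differ in our actual file -->
    <system.web>
      <trust level="Full" originUrl="" />
      <compilation debug="false" targetFramework="4.0" />
    </system.web>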

Nov 15, 2011 at 8:15 AM

Our team used this doc for reference:  http://www.orchardproject.net/docs/Optimizing-Performance-of-Orchard-with-Shared-Hosting.ashx


Nov 15, 2011 at 1:51 PM

Well, after a whole day with two guys here working on it... no difference. Orchard needs a minimum of 120 MB of dedicated RAM for a very simple site with no content. At first it looked promising, but that was just because the app pool was recycling, and if that happens then the load on the server is too high.

We have considered multi-tenancy, but it sounds like one website with all the other websites being pages within that site. Maybe we are missing something, but that would be a hack, and we are not interested in that configuration.

Tomorrow we will test the memory profiler and see what we come up with. But for now... get another hamster!

Nov 15, 2011 at 2:58 PM
mjdobson88 wrote:

We have considered multi-tenancy, but it sounds like one website with all the other websites being pages within that site. Maybe we are missing something, but that would be a hack, and we are not interested in that configuration.

Multi-tenancy isn't like that: tenants are completely segregated shells with their own databases; effectively they *are* separate websites, they just don't consume as much memory as separate Orchard instances would.

The model you describe is actually something I'd like for certain applications but it doesn't exist as yet.

Nov 15, 2011 at 4:07 PM

Thanks for the reply.

Sounds like we should look at multi-tenancy and see what's involved in setting it up.

Nov 15, 2011 at 4:48 PM

Just enable the "Multi Tenancy" feature and you can start adding new tenants from the admin dashboard. Each tenant exists on a separate domain and has its own admin area. Tenants can even have different modules and features enabled.
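
Once a tenant is created, Orchard stores its configuration in App_Data/Sites/<TenantName>/Settings.txt. One looks roughly like this (field names from memory, values only an example):

    Name: Tenant2
    State: Running
    RequestUrlHost: tenant2.example.com
    RequestUrlPrefix:
    DataProvider: SqlCe
    DataConnectionString: null
    DataTablePrefix: Tenant2

Double-check the field names against a tenant created through the admin UI before editing these files by hand.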