Feature Team: Performance

Topics: Announcements
Coordinator
Oct 5, 2011 at 11:46 PM

This thread is for the Performance feature team. If you want to help with the design, implementation, testing and documentation of this feature, please chime in here.

The Performance feature's goal is to recognize and fix performance bottlenecks and to address some of the top performance requirements of Orchard sites. Possible directions for this group include: adding indices where they make sense on core tables, optimizing queries based on the new ContentManager methods or the Aggregate attribute, second-level caching, read-uncommitted mode, a memcache module, and reverse proxy caching.

Initial members:

  • Chris Bower
  • Sébastien Ros
Coordinator
Oct 6, 2011 at 6:29 PM

Just a little post to clarify those different points and their goals.

Orchard, like any other CMS, is prone to performance issues, because an extensible CMS can't be specialized, by definition and by design! But there are ways to mitigate those problems. Here is a list of what could easily be done to improve the overall performance, and even make it amazingly fast.

- Indices

Today there are no indices at all in the generated database. This decision was made because they could be created by an external module. But it might be time to tackle this, as big websites are coming and need db optimization. It could still be done on a case-by-case basis, but we think that some default indices would benefit the majority of users.

Chris has already run some profiling and it appears some common indices are helping a lot.

Orchard's migrations can handle them, so adding indices would be as simple as running an update. It could also be done as a separate module; nothing is decided, it will depend on the discussions in this thread.

A first approach would be to run query analysis and derive a set of recommendations for the most common queries, for the core and also per module.
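To make this concrete, adding an index in an Orchard migration is a one-liner; here's a sketch, where the table and column names are hypothetical stand-ins for whatever profiling identifies:

```csharp
public class Migrations : DataMigrationImpl {
    public int UpdateFrom1() {
        // "MyModule_MyRecord" and "OwnerId" are hypothetical names; the index
        // covers a column that profiling showed is filtered on frequently.
        SchemaBuilder.AlterTable("MyModule_MyRecord",
            table => table.CreateIndex("IDX_MyRecord_OwnerId", "OwnerId"));
        return 2; // next schema version
    }
}
```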

- Optimized queries

As of Orchard 1.3, a couple of methods have been introduced which allow some queries to be optimized. They are:

- GetMany()
This lets you specify a set of ids to be loaded with only one query. Wherever there is a loop over ids to load content items one by one, it should be replaced by this call.

- QueryHints 
This is a parameter to Get and GetMany in which a set of Records can be specified to be loaded eagerly, when it's well known that a content item has those parts, preventing a subsequent query from loading each record.

- Aggregate 
This is an attribute set on a Record's relationships when the relationship is part of an aggregate in the DDD sense. It is used, for instance, on the ContentTags record, as loading a TagsPart doesn't make sense without loading the Tag itself.
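As a quick illustration of the first two methods together (a sketch from memory; check the exact QueryHints API in your Orchard version):

```csharp
// Instead of looping and calling Get(id) once per item, load the whole
// batch in one query and eagerly expand a record we know we'll need.
var items = _contentManager.GetMany<ContentItem>(
    ids,                      // the set of content item ids to load
    VersionOptions.Published,
    new QueryHints().ExpandRecords<CommonPartRecord>());
```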

- Second level caching
Avoiding sending queries to the DB when the result of the query can be cached by NHibernate directly will reduce SQL queries. There are a bunch of items which are queried over and over and could benefit from this option. Just think of the User object, or Site, ...

There is a module for that on the gallery, which might need some changes in the code to simplify the implementation. It might also find its place in Orchard's source code, but that would also mean documenting how to configure it on farms, especially in Azure.

- Read uncommitted 
The current default isolation level is ReadCommitted, defined in code, in the TransactionManager class. This can be changed, or provided as a module.
The discussion might be about which way will provide the user with a choice, or no choice at all. It could be done through settings, or provided by the database provider. The issue is that SQL CE only supports ReadCommitted, while SQL Server also handles ReadUncommitted. It's an important choice, as ReadCommitted might trigger locks more often, even more so when there are heavy background tasks. And I think going with ReadUncommitted is very safe in Orchard's scenarios.
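For context, the isolation level is just a setting on the TransactionOptions where the scope is created; switching it would look roughly like this (a fragment, not the actual TransactionManager code):

```csharp
using System.Transactions;

// Sketch of what TransactionManager would create with ReadUncommitted:
var scope = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions {
        IsolationLevel = IsolationLevel.ReadUncommitted // current default is ReadCommitted
    });
```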

- Memcache module
There is the Cache module which is currently available on the forum. I personally made this one, and I already know how I will improve it, by handling donut caching. Something that could be done in conjunction with that change would be to use a distributed cache implementation on farms, and Azure.

- Reverse proxy caching
This is an important scenario for the performance of heavily loaded websites. The cache module by itself helps a lot, but reverse proxy caching is unbeatable. Take a look at NGinx and Varnish for the most used implementations. The only limitation today comes from antiforgery tokens, but the improvements I intend to make to the Cache module will allow this scenario to be enabled. But I prefer to keep some suspense here ;)

Coordinator
Oct 6, 2011 at 6:45 PM

Short comment on this: improving core module performance will be great, but most people are running with additional modules. Can we take the top-downloaded modules and include them into any profiling we do on this feature team?

Coordinator
Oct 6, 2011 at 6:48 PM

Sir Yes Sir !

Developer
Oct 7, 2011 at 1:14 AM

Memcache is something I am going to consult on; on top of this, ITV is thinking about this as well.

Oct 7, 2011 at 1:04 PM

Count me in !!

Trello: aurelienverla

Oct 8, 2011 at 1:06 AM

Yes, I'll try and help out in this area as well.

Trello: kolektiv

Oct 20, 2011 at 6:31 PM

Sebastien reached out to me via email to get my input on the second level caching, and I'm responding on here so that it's included in this discussion.

My first point is that I need some clarification as to what you mean when you say second level caching. I'm assuming you mean both second level caching and query caching. The major difference is that second level caching only affects objects that are loaded by ID, so in Orchard's case it would be when you call _someRepository.Get(someId). This is implemented fully by the module I have in the gallery, and you can see the source and everything here.

As an overview of what had to be done, I implemented:

  • A Fluent NHibernate convention that basically tells it to cache all the stuff that Orchard knows about
  • An overridden AbstractDataServicesProvider that
    • Registers the cache convention
    • Does a hack that apparently works around the unused ContentPartRecord proxies on ContentItemRecord or ContentItemVersionRecord. This has caused an intermittent issue that I've seen when the appdomain reloads, which should be investigated further. I believe Louis said that with NHib3 these hacks wouldn't be necessary, but I have no idea if that is the case.
  • An override for both of the database providers, configuring them to use the syscache.
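For reference, the cache convention in Fluent NHibernate can be as small as this (a sketch; the actual convention in the module may differ):

```csharp
using FluentNHibernate.Conventions;
using FluentNHibernate.Conventions.Instances;

// Tells NHibernate to use read-write second level caching for every
// mapped class it knows about.
public class CacheConvention : IClassConvention {
    public void Apply(IClassInstance instance) {
        instance.Cache.ReadWrite();
    }
}
```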

All that said, a few things should be looked at in terms of integrating this into the core. First, the bug I mentioned with the unused proxies needs to be addressed; it's been so long since I worked on this that I don't even really remember what the exact issue was. Second, the user should be able to enable/disable the cache, as well as select a different cache provider. I haven't looked into the current state of caching with NHibernate 3, but I'm pretty sure the official source for providers is here. The user should be able to select from the providers, and needs to be able to provide configuration for any providers that require it (like memcached or velocity/appfabric).

Let me know if you have any questions regarding the second level cache.  I will do another post following this one regarding the query cache.

Oct 20, 2011 at 7:01 PM

Beyond the second level cache, you have the query cache, which caches queries that go beyond "get by id".  In my experience, this has been difficult because NHibernate needs to be able to be smart about invalidating the cache when something changes, and I've had trouble with that.  That said, I believe due to the rigidity of the Orchard data model, it has seemed in my testing to be very good (perfect in fact) in terms of invalidating the cache when necessary.  So it would be fantastic to get working.  I've gotten it partially working with changes to the Orchard core, but I ran into significant issues getting it to work for every query, and haven't had the chance to come up with a solution. 

As far as tracking down why certain queries weren't getting cached, these are my best guesses:

  • I think the CacheConvention isn't getting applied to the objects that I'm having issues with.  If I remember correctly (and I probably don't), I was seeing issues with items that were records without a corresponding part.  For example, if you look at the docs on creating 1-N and N-N relationships, the StateRecord class does not have a corresponding part and does not inherit from ContentPartRecord.  I believe the CacheConvention would not get applied to this class.
  • I also think that there's a problem with NHibernate not being aware of some of the relationships that are lazy loaded, but I'm not sure. Firstly, I'm wondering if NHibernate is not really able to deduce the true relationship of the lazy-loaded items (by this I mean items that are marked LazyField<T>) because the types don't match up properly. In addition, I think that the way lazy loading is handled in Orchard (at least in the examples I've seen, with a delegate in the handler) is inherently in conflict with NHibernate caching in general, as the delegates (I think) will preempt NHibernate's native ability to lazy-load, which takes advantage of the cache. These are guesses, but I believe they are issues that need to be addressed.

Here are the archives that include the modifications I made to the DBCache module as well as Orchard Core (this was prior to the 1.3 release). Note that these just contain the modified files.

  • http://dl.dropbox.com/u/563147/querycache-Orchard.zip
  • http://dl.dropbox.com/u/563147/querycache-Contrib.DBCache.zip

Let me know if there are any questions regarding the query cache.

Oct 20, 2011 at 7:07 PM

Also, in response to the Read(Un)Committed point - I'm not 100% sure, but I'm fairly positive I remember from a previous project that this setting impacted the query cache. I don't think it's supposed to, but we were getting really weird behavior. This might have been a bug that has since been fixed, or an issue with our application's configuration, but I believe we had to turn on ReadUncommitted to get the query cache to work fully.

Just something to keep in mind.

Coordinator
Oct 20, 2011 at 7:57 PM

Wow, thanks for the great info. If one of the outputs of the work of this group is to change the prescriptions in the 1-n and n-n article, that's fine.

Oct 20, 2011 at 9:47 PM

Just to be clear, I'm not really confident about the query cache culprits; I'm just making educated guesses. I'm not sure how much time, if any, I'll have to dig into this further in the near future, but just so you know, NHProfiler is fantastic for at least determining what is and is not being cached. Also, I don't know if he'd be interested in dedicating any time, but I think Ayende would be able to tell you in about 30 seconds what the problem is. He knows his stuff.

Nov 1, 2011 at 11:21 AM
Edited Nov 1, 2011 at 1:32 PM

I've started adding some indexing in my own modules. I'm wondering if it could be beneficial to have more options exposed in the CreateIndex method? I'm no DBA and my knowledge of indexing is poor at best!

Nov 4, 2011 at 9:31 PM

I just looked at the built-in CacheManagement objects, and I've got a question. Is there a reason why they don't give you the ability to set expiration information, and why you can't update/invalidate something in the cache?

Coordinator
Nov 4, 2011 at 9:36 PM

It does, and you can. That's all through ISignal. Check out existing usage of the interface.
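The usual pattern looks roughly like this (the key and signal names are illustrative):

```csharp
// Cache a value and monitor a signal; triggering the signal later
// invalidates the cached entry.
var settings = _cacheManager.Get("MySettings", ctx => {
    ctx.Monitor(_signals.When("MySettings_Changed"));
    return LoadSettings(); // hypothetical expensive computation
});

// Elsewhere, after saving changes:
_signals.Trigger("MySettings_Changed");
```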

Nov 4, 2011 at 9:41 PM

Ah I see.  Interesting, haven't seen something like that before.  Thanks.

Nov 10, 2011 at 5:46 PM
Edited Nov 10, 2011 at 5:48 PM

I've just seen this in IndexingTaskExecutor:

                    indexSettings.LastIndexedId = _taskRepository
                        .Table
                        .OrderByDescending(x => x.Id)
                        .Select(x => x.Id)
                        .FirstOrDefault();

Is my understanding of Linq correct, that this will result in the whole table being queried? Can it be optimised with a simple Take(1)?

Coordinator
Nov 10, 2011 at 5:48 PM

I would assume that the LINQ provider is clever enough to do it by itself. Can easily be validated ! Do you want to do it ?

Nov 10, 2011 at 6:04 PM
sebastienros wrote:

I would assume that the LINQ provider is clever enough to do it by itself. Can easily be validated ! Do you want to do it ?

You know, I always assumed that only Take(x) or Skip(x) would page a query, but a quick test in LinqPad shows that TOP(1) gets included in the SQL with FirstOrDefault. In Linq to SQL at least; I assume it's the same in NHibernate. Learned something there :)

Coordinator
Nov 10, 2011 at 6:19 PM

It depends on each provider ... might be different with NHibernate, at least this version.

Nov 11, 2011 at 2:39 PM

A different method to determine where database indices are needed is to use a specific SQL query to retrieve the slowest queries, and then inspect the execution plan of those queries. I wrote a blog post about this at http://developer.3l.nl/post/12521623971/speeding-up-orchard-database-indexes-in-sql-server.

I'm going to use this method to post a couple of indexes here.

I agree with randompete that a few extra options in the CreateIndex method would really help out developers who don't know their indexes too well but are experiencing performance issues.

Coordinator
Nov 11, 2011 at 5:11 PM

That would be awesome if you could go over more tables like that, and ultimately have some metrics to compare with and without the indexes. There is a Profile project in Orchard's solution for such scenarios. Might be a good starting point.

Nov 23, 2011 at 3:12 PM

I've been trying to find out where other performance bottlenecks are, apart from the database.

Red Gate's ANTS profiler didn't get very far. If I let it attach to IIS directly, it restarts IIS in .NET 2.0 mode instead of .NET 4.0. When I let it create its own webserver, I do get .NET 4.0, but am greeted with the message "Operation could destabilize the runtime" in the WarmupHttpModule constructor (like http://orchard.codeplex.com/discussions/250861). Not very comforting.

However, I'm now looking at Eqatec profiler, and that gives a nice drilldown on what's happening. And it's got a limited time offer to get the full version for free!

Now to see if I can speed up anything...

Nov 23, 2011 at 5:31 PM

This is how I profiled: http://developer.3l.nl/post/13209357555/profiling-orchard-with-eqatec-profiler

Below are two possible performance tweaks. Is it OK to just post them here? I could also create pullrequests, if the ideas are valid...

Caching SiteSettings
SiteService.GetSiteSettings does a database call each time it gets called. This looks unnecessary, as long as the cache gets invalidated when settings are saved. Is there a reason this isn't already cached?

Cache.Get
Cache.Get(key, acquire) is called a lot of times, so it's best to make it as fast as possible. Why doesn't it start with:

    if (_entries.ContainsKey(key))
        return _entries[key].Result;

That way UpdateEntry isn't called each time you want to get an item from the cache!

Coordinator
Nov 23, 2011 at 5:35 PM

We tried to cache the Site Settings for 1.3, and we reverted at the last minute because caching Records was a bad idea. We should instead cache a DTO. Doable. But it would be better to spend some time on the 2nd level NH caching module that has already been discussed. This would solve this and a lot of other queries.

About Cache.Get, can you provide metrics with and without the change? It might be called 1 billion times, but what does it cost? If it's trivial, let's do it if it saves some CPU ... again, metrics can help decide.

Nov 25, 2011 at 4:01 PM

Cache.Get isn't as easily optimised as I thought. The fix where it returns the entry directly doesn't work, because then it never gets invalidated (that's what the token.IsCurrent check does).
What does help in this function is to not use Linq statements. Token.IsCurrent gets executed so many times (a few hundred thousand times per request) that Linq adds an overhead you can easily remove with a simple foreach statement.
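To illustrate the kind of change (a sketch, not the actual Orchard code):

```csharp
// Before: LINQ on the hot path allocates an enumerator and a delegate
// on every call.
// bool current = _tokens.All(t => t.IsCurrent);

// After: a plain loop does the same check with less overhead when it
// runs a few hundred thousand times per request.
bool current = true;
foreach (var token in _tokens) {
    if (!token.IsCurrent) {
        current = false;
        break;
    }
}
```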

A normal page request takes up a lot of CPU. Response time for a page on my machine is about 600ms. That's OK, but when I do a lot of requests simultaneously the response times go drastically up, to where a request takes more than 3 seconds to process. The high-profile websites we're creating at Q42 need to be more stable than that.

I enabled the output caching module, which does a nice job. A request now averages 44 ms, and when I also cache the SiteSettings, it drops to 4. The problem is that a lot of my information can't be cached, so I'm now rewriting the site so that the non-cachable content is requested via ajax.

So the problem is that my sites are now relying heavily on output caching, because Orchard's layout engine doesn't perform very well, and it certainly doesn't seem to scale. Is this a known issue and am I too greedy, or am I implementing it wrong?

Dec 1, 2011 at 3:51 PM
Edited Dec 1, 2011 at 3:52 PM

Hi, I posted a question about a performance issue I'm having (http://orchard.codeplex.com/discussions/281450). Can anyone here take a look and possibly let me know why page loads are slow and CPU use spikes after 20-30 minutes of idle time? This isn't necessarily a request to resolve it; I'm aware the warmup feature might fix the problem. Rather, I want to know what the underlying cause is. Is there a way to fix it without having to set up warmup? And does warmup work for authenticated users, and if so, how?

 

Also, I'm getting familiar with Orchard and looking to use it on a current large-ish project. Even if we don't end up using Orchard as our CMS for this project I'd like to continue working with Orchard for other projects and contribute to the development. I volunteer myself for help on performance improvements, though it will still take more time before I am familiar enough to start making contributions. What's the best place for me to start looking at so I can start helping out? Right now performance issue(s) is the only thing I see that could potentially make us not go forward with Orchard. If we have to do the warmup for every page, or if the warmup doesn't work for Authenticated users it could be a showstopper for using Orchard on this project. 

 

Thanks!

Dec 1, 2011 at 4:59 PM

The startup time is when Orchard is loading all the installed modules and compiling them if necessary, and performing a few other initialization tasks. Dynamic recompilation should only happen when something has changed that needs it, and of course that takes even longer.

The reason you see this happen after 20-30 mins idle is because the default settings of IIS dictate recycling the AppPool after that amount of time. Once the AppPool recycles, Orchard has to perform a cold start. If you have control over IIS on your server then this isn't a problem, you can extend or even remove the time limit. Otherwise you might want a system that regularly pings the site to keep it alive (there's a module available for this).

The Warmup module helps out by caching pages in full, and serving up the cached versions during startup. You have to tell it a list of pages (in Settings) that you want handled like this. The problem with it is that, of course, dynamic content won't work as expected, and you will get problems with AntiForgery tokens if you have any forms on your page (I discovered this when I tried to add a login form to my home page).

Obviously if there are any ways at all to reduce startup time, that would be a massive bonus!

If you want to dig deeper, you could try running some profiling on your Orchard instance to see what is eating up time, and identify areas that can be optimised or just outright cached (there's a decent cache mechanism built into Orchard, and there's an additional Cache module which performs output caching).

Dec 1, 2011 at 5:23 PM
Edited Dec 1, 2011 at 5:29 PM

Thanks for your response. On my laptop (where I first saw the slowness-after-idle issue) I did disable the time-based recycling in IIS, and I still had periods where the site was churning for 20+ seconds on initial page loads after periods of idle time. But the problem might be that I made changes in the admin panel that required dynamic recompiles as the last thing I did before going idle, and then my next page request triggered the dynamic recompile. I'm doing more controlled tests of this issue on my desktop right now. My desktop machine is a better environment for this because it never goes to sleep or hibernates like my laptop (which could also be part of the problem), and is a little beefier (though the laptop is powerful enough to beat some smaller cloud server instances).

 

I'll report back with my results; it's probably not a real problem. It's taking a while because I have to log the access times and then wait 30 minutes before accessing anything again. I'll try 1 hour if the 30 min tests are successful, and then 6h and 12h. With the more controlled tests on my desktop I was able to observe csc.exe do a quick CPU spike on the first access to each page, after which subsequent page accesses were pretty much instant. I'm guessing that's the dynamic compilation.

What exactly is happening with regards to dynamic compilation? Is it just traditional ASP.NET doing the dynamic web app compilation of .cs files, or is Orchard doing some more complex ad-hoc dynamic compilation (like when content types/parts/fields are added/changed)? And either way, are the results of the compilation cached on disk?

 

I signed up for the free trial of Eqatec profiler which others on this board have recommended so I can later profile Orchard. Their confirmation email says they will get back to me in 1-2 days with the license. Seems like the free offer on the Corporate license (normally $999) is still available -- grab it while you can! 

Dec 1, 2011 at 6:06 PM

After 30 min idle, the laggy page load issue still existed, even though in the IIS settings, under Application Pools -> myorchardapppool -> Recycling..., the automatic app pool recycling was disabled.

I found this message in event logs: 

A worker process with process id of '5760' serving application pool 'myorchardapppool' was shutdown due to inactivity.  Application Pool timeout configuration was set to 20 minutes.  A new worker process will be started when needed.

I looked around more and found a setting in Application Pools -> myorchardapppool -> Advanced Settings -> Idle Time-out, which was set to 20 minutes. I changed this to 0 and I'm redoing my test. Just thought it would be nice to document exactly how to work around this issue, because I've seen a few people complain about it. Hopefully this solves it.

Dec 9, 2011 at 7:26 PM

In another discussion this module was mentioned: http://orchardprofiler.codeplex.com/

Just had a play with it; it was only slightly tricky getting it working, but the profiling looks pretty good, and MVC-centric. It even shows you all the SQL queries and their execution times. I'm hoping it can be extended further to allow custom profiling.

Dec 19, 2011 at 2:04 AM

Please fix Orchard's processing of .axd handlers. v1.3 returns 404 for any handler, and the Visual Studio Profiler refuses to profile in this case.

Coordinator
Dec 19, 2011 at 2:39 AM

@gandjustas: you are off-topic. This should be in a different thread. Plus, this is by design, you can restore the handlers you need in web.config.

Dec 19, 2011 at 1:53 PM

What kind of support does/will Orchard have for scaling database performance? 

If an Orchard implementation grows large, is there a way to shard the db, or to have the application function with some of the features interacting with the database in read-only mode? For example, in some of the applications I've worked on in the past, we split it up so that the admin part of our (home grown) CMS wrote to a master DB which was replicated to a couple of other read-only DB servers. The public facing part of the app that dealt with displaying that data distributed the DB read operations across the 2 db servers, freeing up the "main" db server for CMS writes, and other stuff like registrations, logins, etc.

Similarly, are there other strategies for scaling DB performance that are being planned or have been planned? 

Coordinator
Dec 20, 2011 at 12:19 AM

Nothing like that is planned at this point. Are you working on a site that would require it?

Dec 20, 2011 at 2:31 AM

Actually, we are working on a similar scenario for a multi-datacenter deployment. Initially I thought we could get away with import/export, but the requirements seem to be evolving.

Our requirements are as follows:

We can identify, in the website, which http requests are going to initiate a write. For such writes (and subsequent reads in the same request) we have to hit a master SQL server irrespective of the datacenter the user is hitting.

 

Our proposed solution, which we are researching:

First setup Orchard tables to ensure Replication is possible.

Second, set up an http cookie (or analyze the routes) to set up a request-wide parameter flagging that all writes/reads should happen through the master SQL server

Third, override the GetSession functionality, which returns the NHibernate session, to detect such a request-wide parameter and ensure the "write" connection is returned

 

We think the above should logically work based on our research. Bertrand, any thoughts on the above, or are we wasting our time?
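For what it's worth, a very rough sketch of the third step (all names here are hypothetical; the real hook would be wherever Orchard builds its NHibernate session):

```csharp
// Hypothetical override: choose the connection per request.
// IsWriteRequest() would check the cookie/route flag from the second step.
public ISession GetSession() {
    var connectionString = IsWriteRequest()
        ? _settings.MasterConnectionString    // writes always hit the master
        : _settings.ReplicaConnectionString;  // reads can hit a local replica
    return _sessionFactory.OpenSession(OpenConnection(connectionString));
}
```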

Dec 20, 2011 at 2:34 AM

If we succeed in this initiative, I will be happy to write a blog post on this approach. The design has a lot of goodies, such as global distributed caches, which we are working on with a vendor.

Dec 20, 2011 at 4:02 PM

After profiling the database and using the MVC Mini Profiler, you can see lots of duplicate queries being run against the database.

The homepage is querying for the widgets in the zones as well as the layer rules; these are 400+ reads too. Could we add all this to an ICacheManager?

I've enabled the Contrib.DBCache module, which doesn't seem to work; looking in NHProf, the Query Cache and Second Level Cache hit counts are 0. Currently trying to figure out why.

I've noticed AdvancedMenu is quite heavy on reads too, 400+.

Not sure why, but the home page is running a query to get some details from the blogs on the site, using 15% CPU while doing it, even though we have no widgets surfaced that care about the blogs.

Using NHProf, it shows 30+ N+1 selects for a simple homepage.

 

Dec 20, 2011 at 4:09 PM

See this discussion on Mini Profiler: http://orchardprofiler.codeplex.com/discussions/282499

If it could tell us where the queries were originating from it would be a lot more useful :)

Coordinator
Dec 21, 2011 at 6:08 PM

Hi all,

Might want to try the current 1.x branch as I pushed some changes regarding how queries are generated. I analyzed all requests from a vanilla homepage, and divided their number by two, just by playing with the implementation details of the API.

Those performance improvements should be also seen automatically on other pages. I will soon focus on specific modules with heavy content.

Dec 27, 2011 at 5:56 AM

This suggestion is most likely pretty minor but every little bit helps when improving performance. This is something i just learned to improve performance in ASP.NET apps in general, but I think it should also apply to Orchard. Someone please let me know if Orchard's routing system for some reason precludes this from working. 

 

In general, in MVC apps, you name a route in Global.asax.cs:

    namespace MyApp
    {
        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterGlobalFilters(GlobalFilterCollection filters)
            {
                filters.Add(new HandleErrorAttribute());
            }

            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.MapRoute(
                    "TestLinks", // Route name
                    "TestLinks", // URL with parameters
                    new { controller = "TestLinks", action = "Index" } // Parameter defaults
                );
            }
        }
    }


 

People often use Url.Action("ActionName", "Controller", new { property1 = "foo", property2 = "moo" }), but the performance of that degrades significantly once the number of routes becomes significant (I couldn't say specifically what that number is), especially if you are rendering more than a couple dozen links per page. There are two optimizations:

 

1. Use named routes, and use UrlHelper.RouteUrl("RouteName", ...) instead of Url.Action(). The ASP.NET MVC routing system will be able to jump straight to the route with the specified name instead of testing potentially all the routes to see which one to use to render the URL.

2. Uglier code, but use RouteValueDictionary instead of anonymous types. So instead of:

 



    @Url.RouteUrl("Shoes-Canonical", new { controller = "School", action = "Index" })

Use this: 

 

    @Url.RouteUrl("Shoes-Canonical", new RouteValueDictionary { { "controller", "School" }, { "action", "Index" } })

 

There were several posts on StackOverflow that helped me come across this info. I read them a couple of days ago, so I'm probably missing some of them now that I've tried to retrace my Google searches to get the links again:

http://forums.asp.net/t/1335585.aspx

http://blog.whiletrue.com/2009/04/aspnet-mvc-performance/

http://stackoverflow.com/questions/212201/asp-net-mvc-url-generation-performance

 

On a related note, some additional routing optimizations done by the StackOverflow team: 

http://samsaffron.com/archive/2011/10/13/optimising-asp-net-mvc3-routing

Dec 27, 2011 at 6:03 AM
Edited Dec 27, 2011 at 1:34 PM
bertrandleroy wrote:

Nothing like that is planned at this point. Are you working on a site that would require it?

Yes and no. The current site I'm working on, where we've decided to use Orchard, is brand new, so we have no idea how busy it will get. I'm sure we'll be able to squeeze out performance by tuning various things and improving hardware here and there for the first couple of years while it grows.

 

The 2nd site isn't using Orchard yet, but its current incarnation already employs such a strategy. We're in the planning stages of rewriting that site, and I'm thinking about proposing Orchard as a replacement for the poor CMS implementation done there by my evil predecessors.

Dec 30, 2011 at 11:50 PM

Our team is working on some potentially high-traffic projects on Windows Azure with Orchard. Would you therefore be interested in our optimizations of the Azure FileSystem?

We did quite a lot of work on the Image Gallery module too, but it's outside the scope of this topic. Let's just say that the implementation is really perf (and cloud, by the way) unfriendly. We will probably try to build an implementation of ICache for Windows Azure Caching and publish it as a module, too.

Coordinator
Dec 31, 2011 at 12:08 AM

Yes, please share your improvements on the FileSystem implementation, though I don't really see how it could affect the overall performance ...

The ICache would be welcome.

Coordinator
Dec 31, 2011 at 5:36 PM

Quick status on the current perf work.

I tried for two days to implement a correct db caching solution using NHibernate. I stopped due to bugs in the NH version we are currently using. As I made some query optimizations, they happened to surface some issues with the NH cache implementation. Those issues have been resolved in the latest NH version, though. The upshot is that this will be implemented when we upgrade the NH version.

I also spent some time trying to remove unnecessary dependency resolutions. The biggest pain point was calling InjectUnsetProperties in the views, which was always resolving some dependencies even when not necessary. The gain is almost 100% in RPS on my machine on a vanilla homepage, on SQL Server.

If someone wants to work on this subject some more, you can try to optimize how filters are injected into constructors during resolution. There's great potential for improvement there. I don't know yet whether the gain would come from not executing the filters or from not injecting them, though. The code is in ShellContainerFactory line 102, triggering FilterResolvingActionInvoker.

Dec 31, 2011 at 6:34 PM

Sebastien - have you looked into changing the current way we are doing lazy-loaded relationships? I think I mentioned before that it is a big blocking point when it comes to optimizing the NHibernate caching.

Coordinator
Dec 31, 2011 at 9:50 PM

If you look at the changes I applied to the content query, you'll see the parts are generally no longer lazily loaded. There is also an option where you can ask the content query to reflect over a type's parts and eager-load them. It reduces the number of queries by half.

The issue I am facing is with Query cache, because we use a DistinctRootTransformer and there are 3 known bugs with the version we are using right now.

Jan 4, 2012 at 8:44 PM
sebastienros wrote:

Yes, please share your improvements on the FileSystem implementation, though I don't really see how it could affect the overall performance ...

The ICache would be welcome.


It won't affect the overall perf a lot, in fact. But it's a game changer for any module relying heavily upon FileSystem access. We have added a local cache (lasting only for the current request) for file metadata access. With the default FS implementation, each file access by a module leads to a synchronous request. That's not critical when accessing the local HD, but with Azure storage it can be devastating performance-wise.

For instance, a common use of the FS is first to enumerate the files in a directory, then access each file to get some metadata about it, and the content of only one file. In the current implementation, if N files are present in the directory, you get N+1 HTTP GET requests, even if the required metadata is available through the first one.

It can get far worse depending upon the module implementation, however. We performed this work on an image gallery containing only 5 images; on a production server, page requests took more than 20 seconds.

I agree that module optimization is critical here. However, adding some low-level caching is a first step. If the requested file were written or deleted by a concurrent request after cache creation it could be an issue, but I would say that the best way for a module developer to handle this case is to avoid it.
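A request-scoped metadata cache of the kind described above could look roughly like this. This is a sketch only: the `IStorageProvider` and `FileMetadata` types are simplified stand-ins for Orchard's storage abstractions, not the actual interfaces.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-ins for Orchard's storage abstractions (hypothetical shapes).
public class FileMetadata { public string Path; public long Size; public DateTime LastUpdated; }
public interface IStorageProvider {
    IEnumerable<FileMetadata> ListFiles(string path);
    FileMetadata GetFile(string path);
}

// Decorator adding a request-scoped metadata cache: within one request,
// listing a directory populates the cache, so subsequent per-file metadata
// lookups don't each trigger a remote (HTTP) round-trip.
public class CachingStorageProvider : IStorageProvider {
    private readonly IStorageProvider _inner;
    // Lives only for the current request (e.g. registered per-request in the container).
    private readonly Dictionary<string, FileMetadata> _metadataCache =
        new Dictionary<string, FileMetadata>(StringComparer.OrdinalIgnoreCase);

    public CachingStorageProvider(IStorageProvider inner) { _inner = inner; }

    public IEnumerable<FileMetadata> ListFiles(string path) {
        var files = _inner.ListFiles(path).ToList(); // one remote call for the whole directory
        foreach (var file in files)
            _metadataCache[file.Path] = file;        // cache each entry's metadata
        return files;
    }

    public FileMetadata GetFile(string path) {
        FileMetadata cached;
        if (_metadataCache.TryGetValue(path, out cached))
            return cached;                           // served from cache, no remote call
        return _metadataCache[path] = _inner.GetFile(path);
    }
}
```

With this shape, the enumerate-then-inspect pattern described above costs one remote request instead of N+1, at the cost of the staleness window the poster mentions for concurrent writes within the same request.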

OK for the ICache.

Coordinator
Jan 4, 2012 at 11:24 PM

Here's what the Umbraco folks have been up to recently: http://umbraco.com/follow-us/blog-archive/2012/1/4/umbraco-5-on-performance-and-the-perils-of-premature-optimisation.aspx

They're working on perf for 5.0, so this is definitely worth a read.

Feb 23, 2012 at 4:23 PM

Can I ask if anything was done about the DistinctRootTransformer and the 3 known bugs? Any updates in 1.4 or later?

Coordinator
Feb 23, 2012 at 10:07 PM

Can you explain what the "three known bugs" are? What about DistinctRootTransformer? This sounds mysterious.

Feb 23, 2012 at 10:45 PM
sebastienros wrote:

If you look at the changes I applied to the content query, you'll see the parts are generally no longer lazily loaded. There is also an option where you can ask the content query to reflect over a type's parts and eager-load them. It reduces the number of queries by half.

The issue I am facing is with Query cache, because we use a DistinctRootTransformer and there are 3 known bugs with the version we are using right now.

I think he was referring to this post. I am curious as well, and was wondering what the status of Sebastien's efforts was.

Coordinator
Feb 23, 2012 at 11:01 PM

We postponed the NH cache integration because there are too many unsolved bugs in 2.1, so we need to move to NH 3.2. But NH 3.2 is not compatible with medium trust, which would be a major breaking change.

Feb 27, 2012 at 5:53 PM

How about this ?

https://nhibernate.jira.com/browse/NH-2857

Maybe it would be enough for Orchard?

 

Coordinator
Feb 27, 2012 at 7:48 PM

Might be good, then. But we still need to ensure that the dynamic generation of part properties using DynamicMethods is compatible with MT.

Feb 27, 2012 at 7:52 PM

MT?

Coordinator
Feb 27, 2012 at 8:36 PM

Medium Trust

Mar 5, 2012 at 4:14 PM

I would say: drop Medium Trust, or else provide something like an Orchard.MTData module which can be replaced by Orchard.FTData or something.

Mar 5, 2012 at 8:55 PM

I don't think that dropping medium trust would be a good idea. It's quite a compelling feature for Orchard. In my opinion, the team won't go this way.

I don't know the NHibernate contribution process well, but it should be possible to develop an internal patch and then submit it?

Even if I don't like the idea of forking NHibernate 3, I don't think it would be worse than keeping NHibernate 2, if the game is worth it (and far better than dropping MT).

Mar 16, 2012 at 3:07 AM

Sebastien,

 

We install with full trust in production. Do you see us having any issues with upgrading to NHibernate 3.2?

 

Thanks,

-Venkat

Coordinator
Mar 16, 2012 at 8:12 AM

Yes, it won't work. Why do you need to?

Mar 16, 2012 at 1:53 PM
Hi Bertrand,

To take advantage of better query caching

In the earlier thread Sebastien seemed to indicate the only issue was full trust; can you let us know of any other challenges you know about?



On Mar 16, 2012, at 3:12 AM, "bertrandleroy" <notifications@codeplex.com> wrote:

From: bertrandleroy

Yes, it won't work. Why do you need to?

Mar 17, 2012 at 7:32 PM
Edited Mar 17, 2012 at 7:41 PM

NHibernate 3.3.0 CR1 supports Medium Trust now, or at least it should, because they fixed it: https://nhibernate.jira.com/browse/NH-2857

Does this mean NHibernate 3.3.0 CR can be placed into the Core?

It was possible before as well, ignoring the DI; you had to do something like this:
http://puredotnetcoder.blogspot.com/2011/09/update-nhibernate-32-and-medium-trust.html

Please vote up http://orchard.codeplex.com/workitem/18566

Mar 23, 2012 at 10:32 PM

Does Orchard do 'Batch updates' somewhere? They are improved as well in NHibernate 3.2

http://fabiomaulo.blogspot.com/2011/03/nhibernate-32-batching-improvement.html

May 18, 2012 at 6:37 AM

Is there any plan to update the NH libraries in the near future? I didn't notice it in the 1.5 roadmap. Is there any plan or deadline for this?

May 26, 2012 at 2:24 PM

If we upgrade to the latest NHibernate version, will it be possible to use the database cache module correctly? I ask because I've seen strange things happen when this module is enabled (disappearing content items etc.), but it improves the performance a lot!

Coordinator
May 26, 2012 at 6:45 PM

Please use the Cache module instead of the DB Cache. This is the recommended solution. Upgrading to NH 3 will break medium trust, which we can't afford right now.

May 29, 2012 at 12:53 PM
Edited May 29, 2012 at 12:56 PM

According to this post, it is working in 3.3?

http://puredotnetcoder.blogspot.co.uk/2012/05/nhibernate-33-and-medium-trust.html

We are looking at getting 7 Orchard sites running on the same box; we went the route of not using Multi Tenancy in case we had to move the applications and/or databases. Currently our server is running 2 sites and sometimes struggles with the CPU, spiking to 100% for pages; once a page is cached it is better. (Using Contrib.Cache.)

I've been looking at Contrib.RewriteRules as a possible cause of the spike, since it is probably running on every URL request. Is there caching?

I have been using Red Gate Performance Profiler to make our code as efficient as possible; the main slow part is the ContentManager displaying the page. The next thing I want to invest in is NHProf, but being on 2.1.2.4000 I'm sure an upgrade would help a lot (and probably be a pain to do??)

We are using ICacheManager as much as we can, so memory usage is high, but I'd rather have high memory than high CPU spikes.

Do you have any "big" Orchard sites you can test with and try to optimize: complex sites with widgets, data, lots of parts, filters etc.? Using MiniProfiler to show the same SQL statements going through helped a lot; I tried the Infiltrator module but it made the site near enough unusable.

I also tried using Combinator and Cache together on the site and they seem to not like each other, I'm guessing because the caching mechanism isn't shared between servers.

Anything else I can mention/talk about?

Coordinator
May 29, 2012 at 6:20 PM

It's not NH itself that breaks medium trust; it's the fact that they removed ICriteria from it in favor of QueryOver, which makes us use SyntheticExpression to simulate fake properties on ContentItemRecord for parts, so that they are available in queries.

We will remove support for Medium Trust at some point; the only decision is when, which is tied to hosting providers.

Jun 18, 2012 at 8:52 AM

Maybe this is of interest: Umbraco 5 has folded because of performance and maintenance issues. It seems notable that their "avoid premature optimization" practice failed.
http://stream.umbraco.org/video/6408801/codegarden-12-keynote

Here is some reasoning from Oren Eini.
http://ayende.com/blog/156577/on-umbracorsquo-s-nhibernatersquo-s-pullout#comments

I still wonder, why every CMS-project is trying to stuff dynamic data into a relational database. Wouldn't a document database like RavenDB be a better fit?

Coordinator
Jun 18, 2012 at 5:42 PM

Yes, a document db would be great in Orchard. RavenDB can't be used, as we can't require our users to pay for a licence. Today they have the option of free databases like MS SQL CE, or enterprise-grade ones like MS SQL Server. Using MongoDB is not an option either, because it doesn't handle transactions, which Orchard relies on heavily. Also, both make deployments harder, as they would rely on installing a dedicated service.

But we are investigating some other solutions, which would solve all those issues, and the result is encouraging.

Jun 19, 2012 at 9:33 AM
Edited Jun 19, 2012 at 9:34 AM

Would using RavenDB require users to pay for a licence? Unless I'm mistaken (quite possible), users can choose between the AGPL-licensed edition and the commercially licensed edition. Since Orchard is open source, it can use RavenDB with the open source exception (http://ravendb.net/licensing), and users can then choose to run the edition of RavenDB best suited to them. Am I missing something?

Coordinator
Jun 19, 2012 at 6:05 PM

When you put your website in production, then it's a commercial licence that you need.

Jun 25, 2012 at 4:12 PM

When performance profiling our Orchard site (we are using Red Gate's ANTS Profiler), the main hot area was the WidgetService's GetAllWidgets, mainly being called from the WidgetFilter.

I'm pretty sure we can build a caching story around this and the layers, since the only time there will be more widgets or layers is when we add them. We can then signal the cache.

Just thinking of low-hanging fruit for performance; should we do more of this kind of thing with profilers?
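One possible shape for that caching story uses Orchard's ICacheManager together with ISignals, so the cached entry is evicted when a widget or layer changes. A sketch only: the wrapper class, signal name and the exact IWidgetsService members used here are assumptions, not the actual implementation.

```csharp
using System.Collections.Generic;
using System.Linq;

// Assumes Orchard's ICacheManager and ISignals services (Orchard.Caching).
// The service wrapper and signal key below are hypothetical names.
public class CachedWidgetsService {
    public const string WidgetsChangedSignal = "CachedWidgetsService.Changed"; // hypothetical

    private readonly ICacheManager _cacheManager;
    private readonly ISignals _signals;
    private readonly IWidgetsService _widgetsService;

    public CachedWidgetsService(
        ICacheManager cacheManager, ISignals signals, IWidgetsService widgetsService) {
        _cacheManager = cacheManager;
        _signals = signals;
        _widgetsService = widgetsService;
    }

    public IEnumerable<LayerPart> GetLayers() {
        // The result stays cached until the signal below is triggered.
        return _cacheManager.Get("Widgets.AllLayers", ctx => {
            ctx.Monitor(_signals.When(WidgetsChangedSignal));
            return _widgetsService.GetLayers().ToList();
        });
    }

    // Call from the widget/layer create, update and delete handlers
    // so the next GetLayers() call rebuilds the cache.
    public void InvalidateWidgetsCache() {
        _signals.Trigger(WidgetsChangedSignal);
    }
}
```

The point of the signal is exactly what the post describes: the list only changes when someone adds or removes a widget or layer, so those handlers are the natural place to trigger invalidation.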

Coordinator
Jun 26, 2012 at 7:29 AM

Sounds like a good one to try, yes.

Jun 26, 2012 at 9:49 AM

May I suggest testing performance with PLENTY of content to simulate high 'content' sites?

We ran into multiple issues so far that seem to be related to the fact that we have a lot of users (~35k) and that users in Orchard are also 'content'.

Coordinator
Jun 26, 2012 at 6:14 PM

I agree. We might want to create different kinds of websites and see how they behave. It's just a matter of finding the time to spend on it. I do it on a case-by-case basis, and could benefit from such a website/db.

But sometimes it's also about testing specific modules, like Taxonomy for instance, with specific scenarios, so those are really case by case also. That's where you can help. For instance, Sarkie, you could help us implement a solution for widgets. And AimOrchard, you could help us test key scenarios on huge volumes, looking for SELECT N+1 patterns. I can help you implement the solutions.
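For anyone unfamiliar with the SELECT N+1 pattern mentioned above, a rough sketch with simplified record and property names (the `Fetch` extension is from NHibernate 3's LINQ provider, which is the version this thread discusses upgrading to):

```csharp
// SELECT N+1 anti-pattern: one query for the list, then one extra
// query per item the first time a lazy association is touched.
// ContentItemRecord/ContentType are simplified stand-ins here.
var items = session.Query<ContentItemRecord>()
    .Take(50)
    .ToList();                          // 1 query
foreach (var item in items) {
    var typeName = item.ContentType.Name; // lazy load: up to +1 query per item
}

// Eager-loading alternative: pull the association in the same round-trip.
var eager = session.Query<ContentItemRecord>()
    .Fetch(x => x.ContentType)          // NHibernate 3 LINQ eager fetch
    .Take(50)
    .ToList();                          // 1 query total
```

Spotting the first shape in a profiler (MiniProfiler, NHProf) is usually easy: the same parameterized SELECT repeats once per row of the parent query.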

Jul 24, 2012 at 2:24 PM
sebastienros wrote:

When you put your website in production, then it's a commercial licence that you need.

Sebastien, I believe you are wrong. RavenDB specifically caters for open source projects. The licensing page http://ravendb.net/licensing clearly states:

You can use Raven for free, if your project is Open Source. If you want to use Raven in to build commercial software, you must buy a commercial license.

So my understanding is that if you use Orchard as-is, along with any open source Orchard modules, you are permitted to use the free RavenDB license. However, if you purchase or develop any proprietary closed-source module(s) and integrate them with your Orchard deployment, only then would you need to purchase the commercial license.

A good example would be the open source RaccoonBlog https://github.com/ayende/RaccoonBlog/ that utilizes the free RavenDB license.

Coordinator
Jul 24, 2012 at 2:48 PM

Well, we asked Ayende directly. You wouldn't be able to build commercial derivative work from Orchard without having to buy a commercial licence for RavenDB. Seriously, trust us, we wanted it to work. It won't.

Jul 24, 2012 at 3:29 PM

That's a bummer. Any idea what he classifies as "commercial derivative work"? If you just used Orchard as-is to host your commercial company website, surely that could not be considered a derivative work?

Jul 24, 2012 at 3:41 PM

Just as a side note, I started a thread on the RavenDB list before Bertrand replied; there's an interesting response from Oren (https://groups.google.com/forum/?fromgroups#!topic/ravendb/K3xtnKdkd00) which seems to match my interpretation, but I'll leave it at that.

Coordinator
Jul 24, 2012 at 5:27 PM

Let me explain further... Today the Orchard license (BSD) allows for pretty much anything, with attribution. This is a very conscious decision. We want people to be able to take Orchard and sell their own proprietary platform based on it. And they do. If we made RavenDB the persistence layer for Orchard, they could not do that anymore without paying for a commercial RavenDB license. Simple as that. We don't want this to happen. End of story. Unfortunately.

Ayende's blogging platform is a completely different story.

But don't worry, we have a better solution. Stay tuned.

Jul 24, 2012 at 6:01 PM
bertrandleroy wrote:

Let me explain further... Today the Orchard license (BSD) allows for pretty much anything, with attribution. This is a very conscious decision. We want people to be able to take Orchard and sell their own proprietary platform based on it. And they do. If we made RavenDB the persistence layer for Orchard, they could not do that anymore without paying for a commercial RavenDB license. Simple as that. We don't want this to happen. End of story. Unfortunately.

Ayende's blogging platform is a completely different story.

But don't worry, we have a better solution. Stay tuned.

Bingo, we'd have not chosen Orchard for the basis of our platform if there were any licensing costs associated with it.

We are doing more than just an Orchard CMS site in the cloud or a blog. :)

Jul 24, 2012 at 10:18 PM
bertrandleroy wrote:

But don't worry, we have a better solution. Stay tuned.

I'm intrigued. Are we talking optimization of Orchard's use of NHibernate or something more drastic?

Coordinator
Jul 24, 2012 at 10:22 PM
BeyersCronje wrote:

I'm intrigued. Are we talking optimization of Orchard's use of NHibernate or something more drastic?

Yes and Yes

You can't be satisfied with such an answer, can you?

Jul 24, 2012 at 10:25 PM

LOL, suppose that is a better answer than "No and No" :)

Coordinator
Jul 24, 2012 at 10:39 PM

Yes. But you can already play with the NH changes we have made so far by using the NH3 branch from the source code and enabling the SysCache module to use second-level caching.

Oct 7, 2012 at 7:14 PM
Edited Oct 7, 2012 at 7:17 PM

My interest in Orchard performance has reached a crescendo, due to the latencies my colleagues and I are seeing. And while there has been a fair amount of attention given to this topic, an element I think is under-addressed is the JIT startup time.

I've placed a brand-new installation of Orchard 1.5.1 into an Azure web-site (technology preview).  No modules have been added and dynamic compilation is disabled.  The only "unusual" thing I've done is that I've activated multi-tenancy.  That said, when the app-pool times out it takes over 50 seconds for the site to come back up.  Once up, it takes perhaps a half-second to refresh a page (with cache cleared). 

For sites that receive only a handful of visits daily, this 50+ second startup time is a killer. Though I've not (yet) done detailed analysis, I think it is fairly obvious this is due to the sheer size and quantity of the supporting libraries that Orchard depends on. I can't help but think Orchard would benefit from going on a "dependency diet." For example, replace log4net with the built-in ETW / TraceSource capabilities in the BCL. Replace NHibernate with Massive or Dapper (à la Stack Overflow). Etcetera. Yes, some features would be lost, but I tend to think 50+ seconds of startup trumps any other concern.

How does the community feel about startup time, and how to evolve Orchard to solve the problem?

Oct 7, 2012 at 7:51 PM
Edited Oct 7, 2012 at 8:41 PM

Have you tried setting the app pool timeout to zero? This could prevent or reduce the start-up time issues. 

I created a work item and attached a patch to configure the setting in Orchard Azure: http://orchard.codeplex.com/workitem/19106 

I'm not sure if this works on Azure websites. I made the patch before Azure Websites was introduced. 

Oct 9, 2012 at 12:51 AM

Hello Mystagogue,

I personally run multiple Orchard Tenants in azure web-site and do it very successfully. I will be glad to help if I can.

The one thing you must run to use Windows Azure Websites effectively is the Keep Alive module: http://gallery.orchardproject.net/List/Modules/Orchard.Module.Contrib.KeepAlive You will want to enable this module on the "base" site, the one that contains ".azurewebsites.net" in the URL. Make sure to also go to General Settings, check "Enable keep alive behavior", and provide the URL (it should be the one that has ".azurewebsites.net" in it).

I realize this does not address your question about startup time in Azure websites (or for Orchard in general) though I am hoping it helps you out. 

Coordinator
Oct 9, 2012 at 7:28 PM

50s is highly abnormal, especially for Azure. Maybe that instance is way too small? The warmup module may help, but there's something fishy here.

Oct 11, 2012 at 3:23 AM

In case anyone's interested, I measured initial load on my test blog (default blog recipe running on an Azure extra-small instance, using the 1.x tip, NOT using the patch I posted above). It takes 4.5 minutes for the initial load, but after that it's pretty snappy. I tested once last night and once tonight.

I tried with a "small" instance a few months ago and it performed better. 

Oct 11, 2012 at 3:41 AM
Edited Oct 11, 2012 at 3:46 AM

I also see 50s, and it's not Azure.

It is GearHost CloudSites™.

See http://www.realestateinfoexchange.com/

Coordinator
Oct 11, 2012 at 5:21 PM

That's insane. Either Extra-small Azure and GearHost CloudSites are grossly undersized and we should actively discourage people from using them with Orchard (I'm suspecting that's the case), or there is something else going on. In either case, we need to investigate that, I think. Can you file a bug to track this?

Oct 11, 2012 at 6:00 PM

http://orchard.codeplex.com/workitem/19129

Oct 11, 2012 at 6:55 PM
Edited Oct 11, 2012 at 6:58 PM

From my experience, Orchard performs well on an extra-small web role, but not so well with Azure Websites, even on a non-shared small (or even medium) instance.

I didn't test it on an Azure VM, but if you are asking for feedback, I could test that rather fast.

The issue doesn't seem to be memory- or CPU-bound, so I thought it could be linked to the way Azure Websites mounts the site VHD from Azure Storage. However, I wasn't able to perform advanced troubleshooting: Azure Websites is rather opaque, and the site had to be online as soon as possible.

I personally advise people who want to deploy Orchard instances on Azure to go the good old web role way.

 

Oct 11, 2012 at 7:24 PM

Hello All,

Since I utilize Windows Azure websites, thought I would throw my hat in the ring and share my experience. Here is what I am running:

  • Web Site Mode: Reserved Mode
  • Instance Size: Small (1 core, 1.75 GB Memory)
  • Number of Instances: 1

I have had my share of issues with Windows Azure Websites (you can read about it all over CodePlex), but things are starting to come along now. I don't normally post sites, but I feel there is value in doing it this time. Here are some I have implemented that are hosted on Azure Websites; you can then be the judge of whether they are responsive enough (unfortunately, there is no way to show you startup time, though I am hoping there is some value without this):

http://www.procarephysical.com/
http://www.procarephysicalfitness.com/
http://www.endlessmountainsolutions.com/

The first two are pretty database/imagery-intensive sites, while the last is a lightweight site. I use everything I can to speed up performance: warmup, a pinging service, caching, level-2 caching, Combinator, and delayed image loading on the one site. Even so, they are not quite as fast as I would like; I'm still working on that and open to ideas.

Perhaps it is an Azure Websites issue, perhaps it is Orchard being database-"chatty", perhaps it is a module that's not quite optimized, or perhaps it is just me looking for things to be faster when they are fast enough; those are all tough things to decipher. Thanks, TheMonarch, for posting the bug; I will be watching to see if others come up with anything. Hope this helps someone.

Oct 12, 2012 at 6:31 PM
TheMonarch wrote:

Have you tried setting the app pool timeout to zero? This could prevent or reduce the start-up time issues. 

I'm not sure if this works on Azure websites. I made the patch before Azure Websites was introduced. 


Yeah, that doesn't work with Azure Websites; setting the app-pool timeout is not an option there. I'd have to go the web role route for that.

Oct 12, 2012 at 6:34 PM
bertrandleroy wrote:

50s is highly abnormal, especially for Azure. Maybe that instance is way too small? The warmup module may help, but there's something fishy here.


With Azure Websites, you are not in control of instance sizes (processors, memory, size, etc.). As another poster stated, it is all opaque. I believe that under the covers they are providing multi-tenant hosting. As such, it is probably a very large, very powerful instance, but it is shared with any number of other invisible web sites.

Coordinator
Oct 12, 2012 at 6:55 PM

Really? So what's that doing? https://www.windowsazure.com/en-us/pricing/calculator/?scenario=full

Oct 12, 2012 at 10:43 PM

Azure Websites is a distributed multi-tenant hosting system. Your websites are located on VHDs in Azure Storage.

The VHDs are mounted on demand on the server as soon as the load balancer (ARR) receives the first request to your website.

Azure Websites provides an FTP endpoint for updating the VHD content.

You can have shared hosting for free, or dedicated hosting with the same associated cost as a classic cloud service instance.

Azure Websites apparently uses http://www.microsoft.com/hosting/en/us/services.aspx with a customized ARR load balancer on the frontend.

The fact is that this website (http://www.mediavox.fr) performed very poorly on an Azure Websites dedicated small instance (unusable, even with caching and keep-alive activated), but performs well on a cloud service small instance. These 2 test cases benefit from the same CPU and RAM.

I personally suspect the disk latency for small-file read access during the initial compilation to be the bottleneck here: cloud services use a local HD, whereas Azure Websites instances use a remote VHD.

Oct 12, 2012 at 10:53 PM

Thanks very much for posting this, Laere. I was wondering if there was a performance difference. Reserved websites are 2/3 the cost of a web role for the same CPU/RAM, but it seems a web role gives you better options if you need to do settings/config customizations or use a custom stack, and apparently better performance too. Soon web roles will be portable to other hosts once they start supporting the Windows Server 2012 stuff.

Coordinator
Oct 12, 2012 at 10:56 PM

@Laere,

On WAWS, please disable dynamic compilation if you want to get the same performance as on a Cloud Service. The file system doesn't support the FileSystemWatcher, which Orchard uses a lot. You can disable it by renaming config/Sample.HostComponents.config to config/HostComponents.config (removing the "Sample." portion).

And you can also use the Cache module to improve the overall perf.

Oct 18, 2012 at 12:05 AM
Edited Oct 18, 2012 at 12:08 AM

This discussion really helped me. I recently created an Azure Website (Orchard 1.5.1) and found that the page load time was very slow (paid shared instance model). That was OK for my beta period, but I knew I had to get it better before going "live".

I did two things that got the performance to a point that the page load time is just fine (2-3 sec).

1. Rename Sample.HostComponents.config to HostComponents.config

2. Install the KeepAlive module. 

The only downside is that, after doing this, I started to use a bit more CPU. On average, I believe the KeepAlive is using about 15 seconds of CPU per hour (I deduced this because before installing KeepAlive I only had bursts of CPU usage; now I have a constant minimum of 15 sec/hour). The additional cost is well worth the improved performance. I am over my monthly quota, so I am paying about 25 cents a day. I have a minimal site (about 10 pages) and have enabled the following modules: email, anti-spam, search, indexing, Lucene, import/export and favicon.

I wanted to share the stats in case it helps someone else.

Nov 25, 2012 at 10:49 PM

Hello guys,

My team and I have been working with Orchard on Azure, using hosted services or Web Sites, for at least a year. In our experience, the one and only reason for a slow Orchard service or web site on Azure is having your compute in a separate region from your database. Always keep them together and you will not have any issues. We can even run multiple tenants on extra-small hosts without any problem (low-traffic sites, of course). Especially after the last Orchard update (1.6), our web sites are even faster.

The only problem, of course, is that we have to disable the SysCache module for services that we host on multiple instances; otherwise data gets corrupted.

Back to the main issue: the CPU and database hosts have to be in the same region. You can do the test on your own: connect with Remote Desktop to an instance that is in the same region as your database and ping it (around 100 ms), then compare with a database server hosted in another region: 4-5 times slower (400+ ms).

If your site is slow to respond, check where your database is.

Nov 26, 2012 at 9:14 PM

I am just discovering this interesting thread.
Just to add my small point concerning DB optimization.

1) It needs to be applied per DB model, on the DB engine that model depends on

2) For SQL Server, from the beginning of this product, optimizing means:

- use stored procedures and optimize them using execution plans regularly, adding/removing indexes accordingly

- use read uncommitted and apply a short deadlock-and-retry policy

- have foreign keys, but not too many

- prefer de-normalized tables where possible

- raise the DB cache level, pinning the main indexes in memory

80% of this is totally unusable with NHibernate....

Jan 3, 2013 at 12:46 PM

I think HTML5 Offline Web Applications (http://www.w3schools.com/html/html5_app_cache.asp)

can improve perf.

And there is a way to create this kind of file on the fly, like a sitemap.

May 12 at 11:56 AM
If someone is interested in reducing the number of DB requests, I implemented the idea of loading the ContentItem plus its related ContentParts in one database query. You can find the module here. If anyone has an idea to extend it, please let me know.