
Locking in RoutablePathConstraint

Topics: Troubleshooting
Apr 13, 2011 at 7:57 AM
Edited Apr 13, 2011 at 8:09 AM

So I've been poking around the Orchard codebase and I ran into the RoutablePathConstraint class.

I understand what it's for; however, there is some locking going on in there and it's really hurting performance. You're using the normal lock(_syncLock) pattern, and because this RouteConstraint is hit on every page request, multiple concurrent requests end up contending with each other for that lock.

This really limits the total number of Requests Per Second that an Orchard site can churn out.

In short: if you can't eliminate the locking that's going on there, why aren't you using a ReaderWriterLockSlim? That allows multiple readers to access the constraint without blocking each other.

Disclaimer: I might be missing something here, but as far as I know the IRoutablePathConstraint is a singleton, so this should be a valid point.


Edit: or, if you are going for performance, using the Monitor class directly (e.g. with TryEnter) might be even faster.

Jun 1, 2011 at 2:12 PM

I've just been looking at this as well. I'm implementing my own routing mechanism: items appearing on multiple routes and in different display states, base paths (e.g. /products/*), as well as extensibility hooks to mess with things even more.

So initially I was following the example of Routable, and the more I've seen what's going on, the more I'm certain that the locking pattern could *already* be hurting performance even on a small site; and for a large site with a lot of content and users, things could get really bad.

I'd like to implement a better locking strategy; first of all in my component, and then if it's satisfactory and performs well, apply the same strategy to Orchard's routing and submit a contribution.

To start with, I'll just detail what's actually happening at the moment.

Firstly; there are in fact two path constraints: RoutablePathConstraint and ContainersPathConstraint (the latter has no locking). They are both populated by an IBackgroundTask. This happens fairly soon after startup, so it seems that all routes have to be compiled before the application can serve any requests. Since compiling the routes involves reading content items from the DB, this will take a very long time for a big site containing complex content items with lots of parts! I think this might be the cause of the startup-time issues some people have seen. I realise the Warmup module is there, but that's fairly limited; you still have to wait for startup to finish, e.g. to log in.

Secondly; here is the code from RoutablePathConstraint which is called from the IBackgroundTask:


        public void SetPaths(IEnumerable<string> paths) {
            // Make a copy to avoid performing potential lazy computation inside the lock
            var slugsArray = paths.ToArray();

            lock (_syncLock) {
                _paths = slugsArray.Distinct(StringComparer.OrdinalIgnoreCase).ToDictionary(value => value, StringComparer.OrdinalIgnoreCase);
            }
        }


As you can see, the entire paths dictionary is locked while first a Distinct() and then a ToDictionary() is performed on the in-memory array. I don't know how often the background task runs, but clearly every time it does, the site will serve no requests while that dictionary is being rebuilt. An obvious optimisation here would be:


        public void SetPaths(IEnumerable<string> paths) {
            var dict = paths.Distinct(StringComparer.OrdinalIgnoreCase).ToDictionary(value => value, StringComparer.OrdinalIgnoreCase);
            lock (_syncLock) {
                _paths = dict;
            }
        }

Note that I was able to remove the ToArray() call because there is now no computation inside the lock; only a reference swap remains! I don't know how much overall impact this one change would have, but routing is fairly critical and any performance improvement is a good one, right? ;)

Something else about the above: it's using twice as much memory as needed. The dictionary doesn't need to store the path as the value, since it's identical to the key. That's a redundant duplication of every path string. Why not store the content item Id instead? (Which would then save the ItemController having to query the database; it could get the item Id straight away.)
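A sketch of that idea, assuming SetPaths could be fed (path, id) pairs; the class and member names here are illustrative, not Orchard's actual API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: key routes by path but store the content item Id as the value,
// instead of duplicating the path string. Illustrative names only.
public class PathIdConstraint {
    private readonly object _syncLock = new object();
    private IDictionary<string, int> _paths =
        new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);

    public void SetPaths(IEnumerable<KeyValuePair<string, int>> pathsWithIds) {
        // Build the new dictionary entirely outside the lock...
        var dict = pathsWithIds
            .GroupBy(p => p.Key, StringComparer.OrdinalIgnoreCase)
            .ToDictionary(g => g.Key, g => g.First().Value,
                          StringComparer.OrdinalIgnoreCase);
        // ...then only swap the reference inside it.
        lock (_syncLock) {
            _paths = dict;
        }
    }

    // Returns the item Id directly, saving the controller a DB lookup.
    public bool TryGetItemId(string path, out int id) {
        lock (_syncLock) {
            return _paths.TryGetValue(path, out id);
        }
    }
}
```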

Third; AddPath and RemovePath methods from RoutablePathConstraint:

        public void AddPath(string path) {
            lock (_syncLock) {
                _paths[path] = path;
            }
        }

        public void RemovePath(string path) {
            lock (_syncLock) {
                if (path != null && _paths.ContainsKey(path))
                    _paths.Remove(path);
            }
        }

These are called from handlers when content is saved or removed. They modify the dictionary to accommodate any changes to paths, or new paths from created content. So what is the point of populating the constraint during IBackgroundTask.Sweep? It should always be up-to-date via these methods, so why ever rebuild it?

I definitely think a ReaderWriterLockSlim will improve things a lot. It'll mean the constraint is only unreadable very briefly whenever content is saved. As for mitigating the startup problems this could be causing, the best answer is probably an additional table of routes that can be read very quickly without having to hit ContentManager.
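For what it's worth, here is a rough sketch of what that strategy might look like. The member shapes mirror RoutablePathConstraint, but this is illustrative, not a tested patch against Orchard:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

// Sketch: the constraint's hot path guarded by ReaderWriterLockSlim.
// Many concurrent readers proceed without blocking each other; the write
// lock is held only for a reference swap or a single-entry change.
public class RwLockedPathConstraint {
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private IDictionary<string, string> _paths =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

    public void SetPaths(IEnumerable<string> paths) {
        // Heavy work (Distinct + ToDictionary) happens outside any lock.
        var dict = paths.Distinct(StringComparer.OrdinalIgnoreCase)
                        .ToDictionary(p => p, StringComparer.OrdinalIgnoreCase);
        _lock.EnterWriteLock();
        try { _paths = dict; }           // just a reference swap
        finally { _lock.ExitWriteLock(); }
    }

    public void AddPath(string path) {
        _lock.EnterWriteLock();
        try { _paths[path] = path; }
        finally { _lock.ExitWriteLock(); }
    }

    // Called on every request: read lock, so readers never block each other.
    public bool Match(string path) {
        _lock.EnterReadLock();
        try { return _paths.ContainsKey(path); }
        finally { _lock.ExitReadLock(); }
    }
}
```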

Monitor also looks very appropriate; I'll have a play with some of these and see what I find.

Does anyone have any further thoughts or experiences to share on this topic?

Jun 1, 2011 at 6:24 PM
Edited Jun 1, 2011 at 6:25 PM

First off: wow, a response. It's been so long since I posted this that I'd almost forgotten about it.

It took me quite some re-digging to verify a few things, but ultimately I agree with you on all points. BTW, the Blog module uses its own BlogPathConstraint; I'm not sure if it suffers from the same issues though.

Jun 1, 2011 at 6:56 PM

As always with perf, no claim is valid without numbers. I'm not saying that there is nothing to this, but we can't act on just staring at the code, I hope you'll understand. Now, if there are numbers and a patch that shows improved numbers, we'll be more than happy to take it. It so happens that this has not shown up as a bottleneck in profiling data so far, as far as I know.

Jun 1, 2011 at 7:03 PM

I'll do a WCAT session tomorrow.

Jun 3, 2011 at 11:24 AM

OK, my development machine is completely hosed. I'm still planning to try to get some meaningful stats; it will probably be sometime next week.