Orchard on Web Farm

Topics: Installing Orchard
Oct 10, 2013 at 3:30 PM
I have seen a number of posts about using Orchard on a web farm and keeping folders synchronised between the servers. I was interested in the article


which states that the Azure module can be used in any hosting environment to store the Settings.txt information in the database. I was wondering whether anyone has done this and how it is working out.

The other option is to write a module that persists the settings information to the database by implementing IShellSettingsManager, which seems like a reasonable solution. Has anyone written such a module, and how did it turn out?
Oct 10, 2013 at 5:25 PM
You can't save the settings in the database, because the database settings are in Settings.txt ;) So how would you connect to the database?
You should instead create an IShellSettingsManager implementation which stores them in a common place, like a shared folder. An Azure Blob Storage implementation already exists; you need to change Host.config to use that specific implementation, so take a look at the one in the Azure folder.
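For reference, switching implementations is just a component registration in Config/Host.config. Something along these lines (a sketch only; the exact type and assembly names vary between Orchard versions, so copy them from the Host.config shipped in the Azure folder):

<autofac defaultAssembly="Orchard.Framework">
  <components>
    <!-- Hypothetical registration: take the actual type name from the
         Host.config in the Azure folder of your Orchard version. -->
    <component instance-scope="single-instance"
               type="Orchard.Azure.Environment.Configuration.AzureBlobShellSettingsManager, Orchard.Azure"
               service="Orchard.Environment.Configuration.IShellSettingsManager" />
  </components>
</autofac>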
Oct 10, 2013 at 8:05 PM
The documentation you refer to does not state that the module can be used to store Settings.txt in "the database" as you are suggesting; rather, it can be used to store Settings.txt in Windows Azure Blob Storage.

And yes, we are using this feature in production, and yes, it works fine ;)

You could of course write an IShellSettingsManager implementation that stores Settings.txt information in a database - you just can't configure the connection string to that database in Settings.txt, because well... chicken and egg. You would have to configure it through some other means, such as CloudConfigurationManager (which is in fact how the blob storage implementation obtains the storage account connection string).
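To make that concrete, a database-backed implementation could look roughly like this - a minimal sketch, assuming a hypothetical ShellSettings (Name, Content) table, a hypothetical "ShellSettings.ConnectionString" setting name, and Orchard's ShellSettingsSerializer so the Content column holds the familiar Settings.txt format:

// Minimal sketch of a database-backed IShellSettingsManager.
// Assumes a hypothetical table ShellSettings (Name nvarchar, Content nvarchar(max)),
// where Content holds the tenant's settings in the usual Settings.txt format.
using System.Collections.Generic;
using System.Data.SqlClient;
using Microsoft.WindowsAzure;
using Orchard.Environment.Configuration;

public class DatabaseShellSettingsManager : IShellSettingsManager {
    // The connection string can't live in Settings.txt (chicken and egg), so it
    // is read from cloud service configuration or appSettings instead.
    private readonly string _connectionString =
        CloudConfigurationManager.GetSetting("ShellSettings.ConnectionString");

    public IEnumerable<ShellSettings> LoadSettings() {
        var result = new List<ShellSettings>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT Content FROM ShellSettings", connection)) {
            connection.Open();
            using (var reader = command.ExecuteReader()) {
                while (reader.Read())
                    result.Add(ShellSettingsSerializer.ParseSettings(reader.GetString(0)));
            }
        }
        return result;
    }

    public void SaveSettings(ShellSettings settings) {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            @"MERGE ShellSettings AS target
              USING (SELECT @name AS Name) AS source ON target.Name = source.Name
              WHEN MATCHED THEN UPDATE SET Content = @content
              WHEN NOT MATCHED THEN INSERT (Name, Content) VALUES (@name, @content);",
            connection)) {
            command.Parameters.AddWithValue("@name", settings.Name);
            command.Parameters.AddWithValue("@content",
                ShellSettingsSerializer.ComposeSettings(settings));
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}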
Oct 11, 2013 at 8:20 AM
Thank you for clarifying, Decorum. It did seem a little weird using something with 'Azure' in the name for on-premises hosting :-)! I think the documentation could be more specific, as the sentence 'It is also possible to use this feature in any other hosting environment where you have a server farm with multiple nodes but no shared file system. To do this you need to do the following' could be misleading.

On the matter of storing the settings in the database, I take the chicken-and-egg point. You could specify a connection string in appSettings as the initial 'seed' and use this to read all the settings items from a specific table. I can see some other challenges with this, however:
  • When you add a new tenant, you still have the issue of ensuring the change is picked up by the other servers in the farm. The same is true of any settings change. The data would also have to be in a common place and checked regularly.
  • If you currently store the tenants in the file system, you would need to migrate them (a rough migration sketch follows this list).
  • I am sure there are others!
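On the migration point, a one-off import along these lines would probably do - a sketch, assuming the same hypothetical ShellSettings (Name, Content) table as above:

// One-off migration sketch: push each existing App_Data/Sites/<tenant>/Settings.txt
// into the (hypothetical) ShellSettings table used by the database-backed manager.
using System.Data.SqlClient;
using System.IO;

public static class SettingsMigrator {
    public static void MigrateFileSettings(string sitesPath, string connectionString) {
        using (var connection = new SqlConnection(connectionString)) {
            connection.Open();
            foreach (var tenantDir in Directory.GetDirectories(sitesPath)) {
                var settingsFile = Path.Combine(tenantDir, "Settings.txt");
                if (!File.Exists(settingsFile)) continue;
                using (var command = connection.CreateCommand()) {
                    command.CommandText =
                        "INSERT INTO ShellSettings (Name, Content) VALUES (@name, @content)";
                    // The folder name under App_Data/Sites is the tenant name.
                    command.Parameters.AddWithValue("@name", Path.GetFileName(tenantDir));
                    command.Parameters.AddWithValue("@content", File.ReadAllText(settingsFile));
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}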
Oct 11, 2013 at 10:30 AM
I think you may have misunderstood - the sentence in the documentation you refer to is correct and not the slightest bit misleading imo. The feature is indeed intended to be used to store Settings.txt in Azure blob storage regardless of hosting. As you know, Azure blob storage is a REST-based service with public endpoints that can be called from anywhere, so it's a good option for shared settings storage among your instances whether they are in Azure, Amazon, some other cloud provider, or on-premises for that matter.

Regarding your idea of having the connection string in appSettings, yes, this is exactly what the blob storage implementation does, except it uses CloudConfigurationManager to make it hosting-agnostic (it reads from either the Azure cloud service configuration or appSettings, depending on which is available). There is also logic in the Orchard.Azure module to let these settings be overridden per tenant by prefixing them with "TenantName:" - you would need to do the same in your implementation for multi-tenancy support.
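That override logic is essentially a prefixed lookup with a fallback; something like this sketch:

// Sketch of the per-tenant override pattern described above:
// look for "<TenantName>:<SettingName>" first, then fall back to the plain name.
using Microsoft.WindowsAzure;

public static class TenantSettings {
    public static string GetSetting(string settingName, string tenantName) {
        var tenantValue = CloudConfigurationManager.GetSetting(tenantName + ":" + settingName);
        return !string.IsNullOrEmpty(tenantValue)
            ? tenantValue
            : CloudConfigurationManager.GetSetting(settingName);
    }
}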

Lastly, the problem you mention about other instances not picking up tenant changes already exists with the blob storage implementation too. If you make changes to Settings.txt (or add a new Settings.txt for a new tenant) through one instance, it does not automatically get picked up by the other instances in the farm - you have to go and restart them manually. There have been some discussions about implementing some kind of cross-instance signalling system that, among other things, could signal other instances to restart their tenants when these kinds of changes are made. But nothing has been implemented yet.
Oct 11, 2013 at 5:06 PM
Agreed. The whole 'BindSignal' mechanism would need to be modified to trigger a refresh in the other Orchard nodes when shell setting changes are made.

You would need to poll the shell settings table each time and check whether any of the modified dates have changed, but that seems like a lot of overhead for this solution.
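For completeness, that polling approach would be something like this, run on a timer (a sketch; the LastModified column is an assumption):

// Polling sketch: compare the table's latest modification stamp against the
// last one we saw; when it moves, the caller reloads settings and restarts tenants.
using System;
using System.Data.SqlClient;

public class ShellSettingsPoller {
    private DateTime _lastSeen = DateTime.MinValue;

    public bool ChangesPending(string connectionString) {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT MAX(LastModified) FROM ShellSettings", connection)) {
            connection.Open();
            var result = command.ExecuteScalar();
            if (result == DBNull.Value) return false; // empty table
            var latest = (DateTime)result;
            if (latest <= _lastSeen) return false;
            _lastSeen = latest;
            return true;
        }
    }
}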

Are you thinking that when you set up Orchard within a web farm, you could have a module which stores a list of all the web servers in that farm? Then, when a tenant is changed, just make an async call to each web server and you're done.

It would be good to publish this as a 'Load Balanced Shell Settings' module.

Oct 12, 2013 at 2:17 PM
Well... having to maintain a list of farm nodes in Orchard is not a good solution imo. Rather, the discussions so far have been about building this around Azure Service Bus, and just directing all nodes at the same service bus endpoint and topic for more of a publish/subscribe approach. That way, no node needs to be aware of any other nodes or how many there are; each node just publishes a message to the topic whenever tenant changes are made, listens on the same topic, and does a tenant restart whenever such a message is received.

I also don't think it would be good to mix this with shell settings - ideally imo they would be completely separate and unaware of each other. I'm thinking something like an additional feature that simply a) intercepts tenant restarts in its own node and publishes a signal whenever one happens, and b) listens for such signals from other nodes and performs a tenant restart in its own node in response. This feature needn't concern itself with the reason or origin of the tenant restart; tenant settings changes would be just one situation in which this occurs.
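A rough sketch of what that could look like against the current Service Bus SDK (Microsoft.ServiceBus.Messaging) - the topic name, the per-node subscription naming and the restart callback are all assumptions, and creating the topic/subscriptions is left out:

// Pub/sub restart signal over Azure Service Bus. Every node publishes to the
// same topic and holds its own subscription, so no node needs to know about
// any other node or how many there are.
using System;
using Microsoft.ServiceBus.Messaging;

public class TenantRestartSignal {
    private readonly TopicClient _publisher;
    private readonly SubscriptionClient _subscriber;

    public TenantRestartSignal(string connectionString, string nodeName) {
        // Topic name is hypothetical; each node listens on its own subscription.
        _publisher = TopicClient.CreateFromConnectionString(connectionString, "tenant-events");
        _subscriber = SubscriptionClient.CreateFromConnectionString(
            connectionString, "tenant-events", nodeName);
    }

    // a) Publish a signal whenever a tenant restarts in this node.
    public void PublishRestart(string tenantName) {
        var message = new BrokeredMessage(tenantName);
        message.Properties["Origin"] = Environment.MachineName;
        _publisher.Send(message);
    }

    // b) Listen for signals from other nodes and restart the tenant locally.
    public void Listen(Action<string> restartTenant) {
        _subscriber.OnMessage(message => {
            // Ignore our own signals to avoid restart loops.
            if ((string)message.Properties["Origin"] != Environment.MachineName)
                restartTenant(message.GetBody<string>());
        });
    }
}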