
Feature Team: Deployment Module

Topics: Core
Jul 31, 2013 at 6:21 PM
The goal of this thread is to propose a base design for a new module enabling a user to exchange content, assets and modules between two Orchard tenants, local or remote. Some work has already been done by Damien Clarke, and this proposal aims to extend what he already did and make it more extensible. Feel free to approve or complete this proposal.

Deployment Plans

Deployment plans are created to aggregate the logic that gathers all the content, assets and modules which have to be pushed to another instance. Deployment plans are created in the Admin UI by configuring specific aggregators.

Deployment plans can be scheduled to be executed at a specific time, or repeatedly.

The result of a deployment plan execution is called a deployment package, and will probably be a NuGet package under the hood.


Aggregators

Aggregators are features defining how to grab content, assets and modules. The result of their execution is integrated into a given deployment plan. The mechanism is extensible, as different developers may have different requirements for retrieving the elements of deployment plans.
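
As a sketch only, such an extensibility point could look like the following C# interface (all names here are hypothetical, not existing Orchard APIs):

```csharp
public interface IDeploymentAggregator
{
    // Display name shown in the Admin UI when configuring a plan.
    string Name { get; }

    // Called when the plan executes; returns the entries (content,
    // assets, modules) this aggregator contributes to the package.
    IEnumerable<DeploymentEntry> Aggregate(DeploymentPlanContext context);
}
```

A deployment plan would then simply be an ordered list of configured aggregators whose outputs are merged into one package.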

Examples of implementations:


Content Picker

Would let a user select which content items have to be included in a plan, using a content picker. Would also provide a part letting users attach a specific content item to a plan.


Query

Includes the result of a query in a plan. For instance, all content items which have been modified since last week, or since the last successful deployment of a specific plan.


Asset

Selects which files to include in the plan.


Module

Selects which modules to include in the plan. It can also include themes, as themes are modules; alternatively it could be kept as a separate aggregator to avoid confusion.


Metadata

Adds specific metadata to the plan, for instance to update content type definitions on the target environment.

File Diff

If enabled on the target instance, can determine which files have to be updated and add them to the plan.
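
One plausible way to implement this, sketched here with hypothetical names: the target instance exposes a hash per file, and the aggregator includes any local file whose hash differs.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class FileDiff
{
    // Returns the relative paths of local files that are new or
    // differ from the hashes reported by the target instance.
    public static IEnumerable<string> ChangedFiles(
        string root, IDictionary<string, string> remoteHashes)
    {
        using (var sha = SHA256.Create())
        {
            foreach (var file in Directory.EnumerateFiles(
                root, "*", SearchOption.AllDirectories))
            {
                var relative = file.Substring(root.Length)
                                   .TrimStart('/', '\\');
                var hash = Convert.ToBase64String(
                    sha.ComputeHash(File.ReadAllBytes(file)));
                string remote;
                if (!remoteHashes.TryGetValue(relative, out remote)
                    || remote != hash)
                    yield return relative; // new or modified locally
            }
        }
    }
}
```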


Services

Services are responsible for pushing and pulling deployment packages. Services have to be extensible and configurable, as they will provide specific communication channels between environments.
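
A hypothetical shape for that extensibility point (illustrative names only, not a definitive API):

```csharp
public interface IDeploymentService
{
    string Name { get; }

    // PUSH: the source environment sends the package to a target.
    void Push(DeploymentPackage package, DeploymentTarget target);

    // PULL: a target environment fetches pending packages.
    IEnumerable<DeploymentPackage> Pull(DeploymentSource source);
}
```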

Examples of implementations:


REST Service

Uses Web API over HTTP to push data to an environment. It requires secure authentication via a private key, and the target system must expose known HTTP endpoints. It can represent a security concern for production environments. This is a PUSH service, as the source environment initiates the deployment.


Repository Service

Uses a Git repository to temporarily store the deployment package, so that different target environments can pull it. This is a PULL service, as the target environments have to poll the Git repository. Optionally they can receive an HTTP signal letting them know a package is available.
The same logic can be replicated for any kind of storage.

This technique is more secure than the REST service, as both the source and the target environments connect out to the repository and don't need any inbound port to be open.
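
The pull side could be driven by a simple polling loop along these lines (hypothetical names; the repository abstraction would hide Git or any other storage):

```csharp
// Runs on the target environment: periodically checks the shared
// store for packages newer than the last one applied.
void PollForPackages(IPackageRepository repository, TimeSpan interval)
{
    var lastAppliedId = LoadLastAppliedId();
    while (true)
    {
        foreach (var package in repository.ListPackagesSince(lastAppliedId))
        {
            Import(package);            // apply content, assets, modules
            lastAppliedId = package.Id;
            SaveLastAppliedId(lastAppliedId);
        }
        Thread.Sleep(interval);         // or wake early on an HTTP signal
    }
}
```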


Processors

Processors behave as filters on a package, letting a user add custom post-processing logic to a package, for instance resolving dependencies, validating a package, reorganizing, …
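
A sketch of how processors could chain (again, hypothetical names): each processor takes the package produced so far and returns a possibly modified one.

```csharp
public interface IDeploymentProcessor
{
    DeploymentPackage Process(DeploymentPackage package);
}

// Applying the chain is a simple fold over all registered processors.
DeploymentPackage RunProcessors(
    DeploymentPackage package,
    IEnumerable<IDeploymentProcessor> processors)
{
    foreach (var processor in processors)
        package = processor.Process(package); // e.g. resolve dependencies, validate
    return package;
}
```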

Other scenarios enabled:

  • Syncing two instances by setting up reverse deployment plans. This can even sync two tenants, one used for admin and the other for the front-end website only.
  • Deploying content after a user approves it using the Workflows module
Jul 31, 2013 at 7:59 PM
What's the difference between diff and asset? Isn't asset a special case of diff?

A package can combine several aggregators, right?

"Services" sounds too vague. How about making it more specific, like "Deployment Service" or "Deployment Protocols"? Same thing for processors, we should be specific in the technical names.

You talk about synchronization. Do we really want bi-directional sync? That sounds complicated. How do people feel about this? Would one-directional deployment be enough? Please answer with real scenarios in mind and not just "would be nice" arguments ;)
Jul 31, 2013 at 9:14 PM
Edited Jul 31, 2013 at 9:31 PM
We use a kind of two-way synchronization with our current CMS (which will be replaced by Orchard). We use D -> T -> A -> P-staging -> P for applications (modules) and P-staging -> P -> A/T/D for content (data).
Jul 31, 2013 at 10:14 PM
I agree with @BertrandLeRoy about sync. Dev to stage and stage to live is enough.
Aug 1, 2013 at 1:02 AM

Asset lets you select specific files.
Diffs automatically select all the necessary files.

A deployment will run all the aggregators and output their results into a package.

Sync is provided out of the box, not as a first-class choice but as a possible configuration: set up an aggregator which gets all users on both sides, and a deployment plan targeting both sides. Run them, and all users are in sync. This scenario has been requested for sharing user accounts between two tenants, for instance. We have nothing specific to implement, and there is nothing we can do to prevent users from doing it, if it works.
Aug 1, 2013 at 6:21 AM

For me a sync feature is useful in the following scenario:
Our objective is that our technical staff (non-developers) will be able to administer Orchard sites: enable/disable features and modules, edit and copy/paste templates, and edit the placement file.
To do so, each member of the staff should have access to a tenant of the staging Orchard server and will need to sync it with the production one. After they check that their changes are OK, they need to move the added/edited modules, content and templates to production.
If they achieve what they want, everything is all right. A more complex scenario arises when they need to undo their changes because what they tried is not possible (remove content or content types, undo template changes and additions). If they could sync the staging tenant with the production tenant, everything would be easier.
Aug 1, 2013 at 6:49 AM
Yes, multiple staging environments sounds like a good scenario for this. Thanks.
Aug 1, 2013 at 9:54 AM
Edited Aug 1, 2013 at 9:54 AM
From experience with MS CRM (which is a kind of dedicated CMS), before each deployment the system automatically backs itself up (metadata and data) before applying imports (the backup is simply an export of the full system, not a DB backup), in order to allow a rollback by reapplying the backup.

I have already raised this point: I think there is a need to version each export with the actual version of the part and of the export/import method (the part and the way of exporting it necessarily have to be in sync); this will avoid applying data to non-matching classes/mappings.
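
Concretely, a target could refuse an import whose recorded part version differs from the one installed, along these lines (an illustrative sketch with hypothetical names):

```csharp
// Each exported record carries the version of the part (and of the
// export method) it was produced with; the import is rejected when
// the target's installed version does not match.
bool CanImport(ExportedRecord record, InstalledPart part)
{
    return record.PartName == part.Name
        && record.PartVersion == part.Version;
}
```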
Aug 1, 2013 at 3:26 PM
I can include the REST and Metadata services work in the Orchard.Api module I am working on, as I have the basis for that along with extensibility in defining authentication schemes (Basic, OAuth, etc. via "authorization providers") for securing the REST API for content items, types, parts, and fields.

Calls can be added to allow for synchronization-specific support. Let me know if this would be helpful and in what context...
Aug 1, 2013 at 8:18 PM
Is it unrealistic to consider including updates for the whole platform in a deployment package, instead of just modules and content?
Aug 8, 2014 at 7:21 PM
Linking to Damien Clarke's module: