Checklist for Migration of Web Application from Traditional Hosting to Cloud

In 2010, Cloud Computing is likely to see increasing adoption. Migrating a Web application from one data center to another is a complex project. To assist you in migrating Web applications from your hosting facilities to cloud hosting solutions like Amazon EC2, Microsoft Azure or Rackspace’s Cloud offerings, I’ve published a set of checklists for migrating Web applications to the Cloud.

These are not meant to be comprehensive, step-by-step, ordered project plans with task dependencies. These are checklists in the style of those used in other industries, such as aviation and surgery, where complex procedures must be carried out reliably. Their goal is to get the known tasks covered so that you can spend your energy on any unexpected ones. To learn more about the practice of using checklists in complex projects, I recommend the book The Checklist Manifesto by Atul Gawande.

Your project manager should adapt them for your project. If you are not familiar with some of the technical terms below, don’t worry: Your engineers will understand them.

Pre-Cutover Migration Checklist

The pre-cutover checklist should not contain any tasks that set the ship sailing, i.e. you should be able to complete the pre-cutover tasks, pausing and adjusting where needed, without worrying that there is no turning back.

  • Set up communications and collaboration
    • Introduce migration team members to each other by name and role
    • Set up email lists and/or blog for communications
    • Ensure that the appropriate business stakeholders, customers, and technical partners and vendors (e.g. CDN, third-party ASP) are included in the communications
  • Communicate via email and/or blog
    • Migration plan and schedule
    • Any special instructions, FYI, especially any disruptions like publishing freezes
    • Who to contact if they find issues
    • Why this migration is being done
  • Design maintenance message pages, if required
  • Set up transition DNS entries
  • Set up any redirects, if needed
  • Make CDN configuration changes, if needed
  • Check that monitoring is in place and update if needed
    • Internal systems monitoring
    • External (e.g. Keynote, Gomez)
  • Create data/content migration plan/checklist
    • Databases
    • Content in file systems
    • Multimedia (photos, videos)
    • Data that may not transfer over and needs to be rebuilt at new environment (e.g. Search-engine indexes, database indexes, database statistics)
  • Export and import initial content into new environment
  • Install base software and platforms at new environment
  • Install your Web applications at new environment
  • Compare configurations at old environments with configurations at new environments
  • Do QA testing of Web applications at new environment using transition DNS names
  • Review rollback plan to check that it will actually work if needed.
    • Test parts of it, where practical
  • Lower production DNS TTL for switchover
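
For the DNS-related items above, a quick way to verify TTLs and the transition entries is with the dig utility. This is a minimal sketch; www.example.com, newenv.example.com and ns1.example.com are placeholders for your own host names and name server.

# Check the TTL (in seconds) that resolvers currently see for the site's A record
# The TTL is the second field of each answer line
dig +noall +answer www.example.com A
#
# Ask the zone's authoritative name server directly to confirm that the
# lowered TTL (e.g. 300 seconds instead of 86400) has been published
dig +noall +answer www.example.com A @ns1.example.com
#
# Verify that the transition DNS name resolves to the new environment
dig +short newenv.example.com A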

During-Cutover Migration Checklist

  • Communicate that migration cutover is starting
  • Data/content migration
    • Import/refresh delta content
    • Rebuild any data required at new environment (e.g. Search-engine indexes, database indexes, database statistics)
  • Activate Web applications at new environment
  • Do QA testing of Web applications at new environment
  • Communicate
    • Communicate any publishing freezes and other disruptions
    • Activate maintenance message pages if applicable
  • Switch DNS to point Web application to new hosting environment
  • Communicate
    • Disable maintenance message pages if applicable
    • When publishing freezes and any disruptions are over
    • Communicate that the Web application is ready for QA testing in production.
  • Flush CDN content cache, if needed
  • Do QA testing of the Web application in production
    • From the private network
    • From the public Internet
  • Communicate
    • The QA testing at the new hosting location’s production environment has passed
    • Any changes for accessing tools at the new hosting location
  • Confirm that DNS changes have propagated to the Internet
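
One lightweight way to confirm propagation is to query a few well-known public resolvers and compare their answers with the new environment’s IP address. A minimal sketch, with www.example.com standing in for your site:

# Query public resolvers (Google Public DNS and OpenDNS) and compare
# the answers with the IP address of the new hosting environment
for resolver in 8.8.8.8 208.67.222.222; do
    echo "== $resolver =="
    dig +short www.example.com A @"$resolver"
done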

Post-Cutover Migration Checklist

  • Cleanup
    • Remove any temporary redirects that are no longer needed
    • Remove temporary DNS entries that are no longer needed
    • Revert any CDN configuration changes that are no longer needed
    • Flush CDN content cache, if needed
  • Check that incoming traffic to the old hosting environment has faded away to zero (see the sketch after this list)
  • Check that traffic numbers at new hosting location don’t show any significant change from old hosting location
    • Soon after launch
    • A few days after launch
  • Check monitoring
    • Internal systems monitoring
    • External (e.g. Keynote, Gomez)
  • Increase DNS TTL settings back to normal
  • Archive all required data from old environment into economical long-term storage (e.g. tape)
  • Decommission old hosting environment
  • Communicate
    • Project completion status
    • Any remaining items and next steps
    • Any changes to support at new hosting environment
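
As a sketch of the traffic check mentioned above, assuming the old environment runs Apache with its access log in the default Ubuntu location (adjust the path for your distribution and virtual hosts):

# Count today's requests in the old environment's Apache access log
# Apache logs dates as DD/Mon/YYYY, which is what date +%d/%b/%Y produces
TODAY=$(date +%d/%b/%Y)
grep -c "$TODAY" /var/log/apache2/access.log
#
# Watch live for any stragglers that are still hitting the old environment
tail -f /var/log/apache2/access.log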

The checklists are also published on the RevolutionCloud book Web site at www.revolutioncloud.com/2010/01/checklists-migration/ and on the Checklists Wiki Web site at www.checklistnow.org/wiki/IT_Web_Application_Migration

Using Amazon Elastic Block Store (EBS) with an EC2 Instance

One of the differences between Amazon EC2 server instances and normal servers is that the server’s local disk storage state (i.e. changes to data) on EC2 instances does not persist when an instance is shut down and powered back on. This was mentioned in my earlier post about hosting my Web site on Amazon EC2 and S3.

Therefore, it is a good idea to store your home directory, Web document root and databases on an Amazon EBS volume, where the data does persist, much like on a normal networked hard drive. Another benefit of using an Amazon EBS volume as a data disk is that it separates your operating system image from your data: when you upgrade from a server instance with less computing power to one with more, you can simply reattach your data drive to the new instance.
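
As a sketch of that reattachment step, assuming the Amazon EC2 API command-line tools are installed and configured (the volume, instance and device IDs below are placeholders):

# Detach the data volume from the old, smaller instance
ec2-detach-volume vol-12345678
#
# Attach it to the new, more powerful instance under the same device name
# Note: the volume and the new instance must be in the same availability zone
ec2-attach-volume vol-12345678 -i i-87654321 -d /dev/sdh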

You can create an EBS volume and attach it to your EC2 server instance using a procedure similar to the following.

First, create an EBS volume.

You can use Elasticfox Firefox Extension for Amazon EC2 to:

  • create an EBS volume
  • attach it to your EC2 instance
  • alias it to a device (in this example, /dev/sdh)

Then format and mount the “disk” on your EC2 instance and move your folders to it, using a procedure similar to the following commands issued from a bash shell.

# Initialize (format) the EBS drive to prepare it for use
# Note: replace /dev/sdh below with the device you used for this EBS drive
sudo mkfs.ext3 /dev/sdh
#
# Create the mount point where the EBS drive will be mounted
sudo mkdir /mnt/rj-09031301
# Side note: I use a naming convention of rj-YYMMDDNN to assign unique names
# to my disk drives, where YYMMDD is the date the drive was put into service
# and NN is the serial number of the disk created that day.
#
# Mount the EBS drive
sudo mount -t ext3 /dev/sdh /mnt/rj-09031301
#
# Temporarily stop the Apache Web server
sudo /etc/init.d/apache2 stop

#
# Move the current /home folder to a temporary backup
# This temporary backup folder can be deleted later
sudo mv /home /home.backup
#
# Copy the home directories onto the EBS drive, preserving ownership and permissions
# (without this step, the symbolic link below would point at an empty location)
sudo cp -a /home.backup /mnt/rj-09031301/home
#
# Symbolic link the home folder on the EBS disk as the /home folder
sudo ln -s /mnt/rj-09031301/home /home
#
# Start the Apache Web server
sudo /etc/init.d/apache2 start
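
The same pattern can be applied to the Web document root (and, with more care, to database data directories). Below is a minimal sketch for Apache’s default Ubuntu document root, /var/www, reusing the same EBS mount point; adjust the paths for your own layout.

# Stop Apache before moving the document root
sudo /etc/init.d/apache2 stop
# Keep a temporary backup of the current document root
sudo mv /var/www /var/www.backup
# Copy it onto the EBS volume and symbolically link it back into place
sudo cp -a /var/www.backup /mnt/rj-09031301/www
sudo ln -s /mnt/rj-09031301/www /var/www
# Start the Apache Web server again
sudo /etc/init.d/apache2 start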

Limitations:

One current limitation of EBS is that a particular EBS volume can only be attached to one server instance at a time. Hopefully, in a future version of EC2 and EBS, Amazon will enable an EBS volume to be attached to multiple server instances concurrently. That would let EBS be used much like SAN or NAS storage in a traditional (pre-cloud-computing) server environment, and would enable scaling Web (and other) applications without having to copy and synchronize data across multiple EBS volumes. Until Amazon adds that feature, you will need to maintain one EBS volume per server and keep their data in sync. One way of making the initial clones is to use the feature that creates a snapshot of an EBS volume onto S3.
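
A sketch of that snapshot-and-clone approach, again assuming the EC2 API command-line tools are installed (the volume and snapshot IDs are placeholders):

# Create a snapshot of the source EBS volume (the snapshot is stored in S3)
ec2-create-snapshot vol-12345678
#
# Once the snapshot has completed, create a new volume from it
# in the availability zone of the server that will use it
ec2-create-volume --snapshot snap-87654321 -z us-east-1a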

This Web Site is Now Hosted on Amazon EC2 & S3

This web site, www.rajiv.com, is now hosted on Amazon.com’s Elastic Compute Cloud (EC2) and Simple Storage Service (S3), which are part of the Amazon Web Services offerings. If you are a technologist, I recommend EC2 and S3. To learn more about them, you can follow the links in this article.

Benefits of hosting a Web site on EC2 & S3

  • The hosting management is self-service. Anytime you want, you can provision additional servers yourself, immediately. Unlike with most traditional hosting companies, there is no need to contact their staff and wait for them to set up your server. On EC2, once you have signed up for an account and set up one server, you can provision (or decommission) additional servers within minutes. Even the initial setup is self-service.
  • EC2 enables you to increase or decrease capacity within minutes. You can commission one or hundreds of server instances simultaneously. Because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs (see the sketch after this list). Billing is metered by the hour. This flexibility of EC2 benefits many use cases:
    • If your web sites get seasonal traffic (e.g. a fashion site during shows) or occasional periods of much higher traffic (e.g. a news site), EC2’s pay-for-what-you-use-by-the-hour business model is cost-effective and convenient.
    • If yours is the R&D or skunkworks group at a large or medium-size organization, or a startup company with limited financial resources, renting servers from EC2 can have many benefits. You don’t have to make a capital investment to get a server farm up and running, nor make long-term financial commitments to rent infrastructure. You can even turn off servers when they are not in use, greatly reducing costs.
  • It allows me to use the modern Ubuntu [1] GNU/Linux operating system, Server Edition. Among Ubuntu’s many benefits are its user friendliness and ease of use. Software installations and upgrades are a breeze. That means less time is required to maintain the system, while I retain the flexibility and power that being a systems administrator gives.
  • EC2 has a lower total cost of ownership for me than most hosting providers’ virtual hosting or dedicated server plans. Shared (non-virtual-server) hosting is still cheaper, but it no longer meets my sites’ requirements. [2]
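
As a sketch of that API-driven provisioning, assuming the EC2 API command-line tools are installed and configured (the AMI ID, key pair name and instance ID are placeholders):

# Launch one new small server instance from an existing machine image
ec2-run-instances ami-12345678 -k my-keypair -t m1.small
#
# List the instances in the account to watch the new server come up
ec2-describe-instances
#
# Decommission an instance when it is no longer needed
ec2-terminate-instances i-87654321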

Potential drawbacks/caution with EC2 & S3

  • While S3 is persistent storage, EC2 virtual server instances’ storage does not persist across server shutdowns. So if your web site is running a database and storing files on an EC2 instance, you should implement scheduled, automated scripts that regularly back up your database and your files to S3 or other storage (see the sketch after this list).
    • Consistent with what I read in some comments online, my EC2 virtual server instance did not lose its file-system state or settings when I rebooted it. So rebooting seems to be safe. [3]
    • This potential drawback is arguably a good thing in some ways. It compels you to implement a good backup and recovery system.
    • This also means that after installing all the software on your running Amazon Machine Image (AMI), you should save it by creating a new AMI image of it as explained in the Creating an Image section of the EC2 Getting Started Guide.
      • This is a chore, since you may want to do this every time you update your software, especially with security patches. Until Amazon implements persistent storage for EC2 instances, doing it monthly is a reasonable compromise. You can script this to be partly or fully automated. Since Amazon’s EC2 instances are quite reliable, this is not a major concern.
  • An EC2 instance’s IP address and public DNS name persist only while that instance is running. This can be worked around as described under the tech specs section below.
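
As a sketch of such a backup script, assuming a MySQL database, the s3cmd tool configured with your AWS credentials, and placeholder user, password, database and bucket names:

#!/bin/bash
# Nightly backup of the database and Web files to S3
STAMP=$(date +%Y%m%d)
# Dump and compress the database (user, password and database name are placeholders)
mysqldump -u backupuser -pSECRET exampledb | gzip > /tmp/exampledb-$STAMP.sql.gz
s3cmd put /tmp/exampledb-$STAMP.sql.gz s3://example-backup-bucket/db/
# Archive and upload the Web document root
tar czf /tmp/www-$STAMP.tar.gz /var/www
s3cmd put /tmp/www-$STAMP.tar.gz s3://example-backup-bucket/www/

A crontab entry such as 0 3 * * * /usr/local/bin/backup-to-s3.sh would run a script like this nightly at 3 a.m.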

Tech specs of my site:

  1. www.ubuntu.com
  2. I plan to split rajiv.com into separate sites: the India Comedy site will move to comedy.rajiv.com and the SPV Alumni site will move to spv.rajiv.com. The latter two are community sites and will benefit from a community CMS like Drupal.
  3. However, please be aware of a known issue that has on some occasions caused instance termination on reboots.
  4. I created my AMI virtual machine by building on top of a public Ubuntu AMI by Eric Hammond.