Hosting Large-Scale Web Sites: Contract Review Guide for the CTO

If you host and operate large-scale Web sites, or negotiate contract agreements with vendors that provide such services, you need to understand what should be included in a Web hosting infrastructure. This knowledge will help you in three areas:

  1. Providing reliability, scalability & good performance
  2. Minimizing risks via security, privacy, regulatory compliance and reduction of vulnerability to potential lawsuits
  3. Reducing and controlling costs

This guide is meant to help you review upcoming contracts as well as existing services.

Likely audience for this article: Managers, directors and vice presidents of technology, operations or finance at organizations operating large-scale Web sites; Executives supervising technology: CTO, CIO, CFO, COO.

Seven Aspects of Large-Scale Web Hosting

Large-scale Web hosting infrastructure and services can be organized into the following seven areas:

  1. Servers & Environments
  2. Network & Other Appliances
  3. Managed Hosting Services
  4. Third-party Provided Services
  5. Program Management Office, PMO
  6. Account Management
  7. Infrastructure & Facilities

Checklist for Review

You can use the following checklist to review your hosting services or a vendor’s proposal.

What to look for

When you review each item below, consider:

  • Is this item included in the vendor’s proposal or in the services we are currently receiving? If it is not included, what are the good reasons it isn’t included?
  • Is this needed for my organization’s current business requirements? Can we do without it? Is it a must have or nice to have for present and reasonable future needs?
  • What are the alternatives?
  • What is the unit price of this item? How does the price scale up as needs grow? How does the price scale down when need for this item decreases?
  • What level of fault-tolerance does this item need? i.e. redundancy, standby backups, time to recover

Some of the above review questions may apply only to physical items and not to services and processes.

Servers

Servers may be physical hardware servers and/or virtual servers managed using software such as VMware, Parallels Virtuozzo or Xen. The services listed below can each run on separate servers, or multiple services can run on one server. It is generally better to have servers run only one (or a minimal number) of the major services listed below. That reduces complexity and saves expensive staff time spent maintaining, troubleshooting and recovering. Virtualization makes it economical to run multiple virtual servers on shared physical hardware.

The following is a list of commonly found services at large-scale Web sites that require servers.

  • Web
  • Application
    • Content Management software. This is the software that the Editorial and Production teams use to submit, edit, package and manage articles, photos and other Web site content
    • Dynamic Content Assembly. Typically done using Portal Server software, either third-party supplied or in-house developed
    • Data Processing. E.g. workflow engines, jobs/tasks processing servers
    • Middleware
    • Other applications. These are applications that happen to be separate from the main content management system. They could be separate for any number of reasons. E.g. blogs, forums
  • Database

Server Environments

An environment is a self-sufficient set of servers assigned to serve a purpose as described below. Large-scale Web sites typically utilize multiple environments.

  • Production
    • This serves the Web sites to the customers and public.
    • Typically has 99.9% or higher uptime guarantee in the Service Level Agreement
      Please refer to the table below titled Understanding SLA Uptime Guarantee Percentages to compare different time windows when the SLA Uptime measurement gets reset. I recommend that you ensure that the reset window you get is the same duration as your billing cycle (usually monthly) or shorter. This will help avoid having long downtimes without penalty.
  • Staging
    • This is the environment where content packages are developed, integrated and previewed by the Editorial, Design and Production teams before they are published to end users, for example when working on a major site redesign or relaunch over several months. Since the tech teams are often making changes to the Development Integration and QA environments, those are not suitable for content integration work by the Editorial and Design teams. Staging is used in large-scale Web sites where multiple Editors, Designers and Production staff are collaboratively creating content packages and new sections. In smaller Web sites, or in cases where just one or two Editors are working on a piece of content such as an individual article, previewing is done in the Production environment itself with access controls.
  • Quality Assurance (QA)
    • The QA engineers perform Functional Testing and Load Testing here. Doing functional testing while a load test is running is sometimes a good idea as it simulates usage closer to live production.
  • Development Integration
    • Software product code developed by different engineers is integrated here. There could be continuous integration or nightly builds.
    • This is where developers ensure that their code works with other developers’ code (does not break the build, and does not conflict resulting in undesired functionality)
    • Programmers should ensure that the product works here before handing it off to the QA engineers for testing

In a virtualized system the environments may not be physically separate and may regularly grow and shrink at different times. For example when hosted at a cloud computing provider, the QA environment may scale up during load testing and shut down completely during the hours the QA team is not working.

Network & Other Appliances

These are devices to which various servers are directly or indirectly connected.

Managed Hosting Services

  • Systems Administration
    • This typically includes all the management of the physical hardware up to and including the operating system and popular applications that complement the operating system.
  • Database Administration Services
  • Applications Management Services
    • This typically includes all the administration of the applications that run on top of the operating system.
  • Systems Monitoring, Alerting & Reporting
  • Web Support Help Desk, 24×7

Third-party Services

Program Management Office, PMO

  • Project Management
    • PM people, organization, processes
    • Collaborative project management tools, e.g. JIRA, RallyDev, Mingle
    • Shared documentation management tools, e.g. Wiki
  • Change Management Processes & Tools
    • Documentation system
    • Tools for source control, build & deployment
  • RASIC Matrix Describing Roles & Responsibilities
  • Escalation Flowcharts
  • Crisis Management & Emergency Procedures

Account Management

  • Customer service
  • Relationship management
  • Master Services Agreement, MSA
  • Statements of Work, SOW
  • Service Level Agreement, SLA
    • What to look for in the SLA is the subject of a separate article in this series.
  • Billing
    • Monthly bills provided by telecommunications (telco) and hosting companies tend to be extremely complex and lengthy. As a result, they are difficult and time-consuming to review.
    • Always factor one-time setup fees and any implementation fees paid to the vendor and/or their partners into the total cost of the contract. Don’t look only at the recurring charges. A simple way to do this (see the sketch after this list):
      contract cost = implementation fees + (estimated recurring fees x number of recurrences committed to)
      e.g. contract cost for 1 year = setup fees + (estimated monthly charges x 12)
      For most hosting / telco contracts I recommend this simple calculation over more sophisticated methods that factor in the time value of money, because the recurring fees are estimates anyway.
    • Make sure that a 1-year contract is really a 1-year contract and not effectively a 13-month, 15-month or even longer contract by ensuring the following:
    • The contract’s start date is the first date for which the recurring billing begins. This is useful in determining the default end date of the contract. For example:
      If you agree to a 1-year contract with monthly billing, and the first monthly bill will be for services provided April 1, 2010 through April 30, 2010, then the default termination date for the contract is March 31, 2011. If the service provider estimates 3 months for implementation ending on June 30, 2010, and they charge you the monthly service fees for April, May and June, don’t let the vendor tell you the contract start date is July 1. If you paid the monthly fees for services provided on April 1, then the start date is April 1.
    • If the vendor charges you fractional monthly fees for the implementation period and/or charges you one-time setup fees, then you should negotiate and agree on a contract end date that is fair to both parties. Use this guideline: the contract commitment should aim towards a certain money target (revenue for the vendor). If the implementation fees are equivalent to, say, 3 months of recurring billing, you might agree that the end date falls 9 months after the first recurring billing cycle begins.
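
A minimal bash sketch of the contract-cost formula from the Billing item above (the dollar amounts are made-up placeholders, not figures from any real contract):

# contract cost = implementation fees + (estimated recurring fees x recurrences committed to)
# All figures below are made-up placeholders
setup_fees=15000          # one-time setup / implementation fees
monthly_estimate=10000    # estimated recurring monthly charges
months_committed=12       # number of recurrences committed to (1-year contract)
echo $(( setup_fees + monthly_estimate * months_committed ))
# Prints 135000 -- the true 1-year cost, not just 12 x 10000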

Infrastructure & Facilities

This item, infrastructure & facilities, is beyond the scope of this article. It includes the buildings, electric power, generators, climate control, physical security and related staffing.

Understanding SLA Uptime Guarantee Percentages

The table below helps illustrate why you should ensure that the “SLA Uptime Measurement Meter Reset Window” is the same duration as your billing cycle or shorter.

Availability %            | Downtime per year | Downtime per month (30 days) | Downtime per week | Downtime per day
90% ("one nine")          | 36.5 days         | 72 hours                     | 16.8 hours        | 2.4 hours
95%                       | 18.25 days        | 36 hours                     | 8.4 hours         | 1.2 hours
97%                       | 10.96 days        | 21.6 hours                   | 5.04 hours        | 43.2 minutes
98%                       | 7.3 days          | 14.4 hours                   | 3.36 hours        | 28.8 minutes
99% ("two nines")         | 3.65 days         | 7.2 hours                    | 1.68 hours        | 14.4 minutes
99.5%                     | 1.83 days         | 3.6 hours                    | 50.4 minutes      | 7.2 minutes
99.8%                     | 17.52 hours       | 86.23 minutes                | 20.16 minutes     | 2.88 minutes
99.9% ("three nines")     | 8.76 hours        | 43.8 minutes                 | 10.1 minutes      | 1.44 minutes
99.95%                    | 4.38 hours        | 21.56 minutes                | 5.04 minutes      | 43.2 seconds
99.99% ("four nines")     | 52.56 minutes     | 4.32 minutes                 | 1.01 minutes      | 8.64 seconds
99.999% ("five nines")    | 5.26 minutes      | 25.9 seconds                 | 6.05 seconds      | 0.864 seconds
99.9999% ("six nines")    | 31.5 seconds      | 2.59 seconds                 | 0.605 seconds     | 0.0864 seconds
99.99999% ("seven nines") | 3.15 seconds      | 0.259 seconds                | 0.0605 seconds    | 0.0086 seconds

Sources: Wikipedia, my calculations
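
If you want to compute the allowed downtime for your own SLA percentage and reset window, a quick calculation like the following can be run from a bash shell (the availability figure and window length below are placeholders to substitute with your own):

# Compute the maximum allowed downtime for a given SLA uptime percentage
# and measurement window (the values below are placeholders)
availability=99.9        # SLA uptime guarantee, in percent
window_minutes=43200     # length of the SLA reset window, in minutes (30 days here)
awk -v a="$availability" -v w="$window_minutes" \
  'BEGIN { printf "Allowed downtime: %.1f minutes per window\n", (100 - a) / 100 * w }'
# Prints: Allowed downtime: 43.2 minutes per window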

This article is part of a series titled “Guide for the CTO: A compilation of articles on how to lead and manage technologies, projects and people”.

Checklist for Migration of Web Application from Traditional Hosting to Cloud

In 2010, Cloud Computing is likely to see increasing adoption. Migrating Web applications from one data center to another is a complex project. To assist you in migrating Web applications from your hosting facilities to cloud hosting solutions like Amazon EC2, Microsoft Azure or RackSpace’s Cloud offerings, I’ve published a set of checklists for migrating Web applications to the Cloud.

These are not meant to be comprehensive, step-by-step, ordered project plans with task dependencies. These are checklists in the style of those used in other industries, such as aviation and surgery, where complex procedures need to be performed. Their goal is to get the known tasks covered so that you can spend your energy on any unexpected ones. To learn more about the practice of using checklists in complex projects, I recommend the book The Checklist Manifesto by Atul Gawande.

Your project manager should adapt them for your project. If you are not familiar with some of the technical terms below, don’t worry: Your engineers will understand them.

Pre-Cutover Migration Checklist

The pre-cutover checklist should not contain any tasks that “set the ship sailing”, i.e. you should be able to complete the pre-cutover tasks, pausing and adjusting where needed, without worrying that there is no turning back.

  • Set up communications and collaboration
    • Introduce migration team members to each other by name and role
    • Set up email lists and/or blog for communications
    • Ensure that appropriate business stakeholders, customers and technical partners and vendors are in the communications. (E.g. CDN, third-party ASP)
  • Communicate via email and/or blog
    • Migration plan and schedule
    • Any special instructions, FYI, especially any disruptions like publishing freezes
    • Who to contact if they find issues
    • Why this migration is being done
  • Design maintenance message pages, if required
  • Setup transition DNS entries
  • Set up any redirects, if needed
  • Make CDN configuration changes, if needed
  • Check that monitoring is in place and update if needed
    • Internal systems monitoring
    • External (e.g. Keynote, Gomez)
  • Create data/content migration plan/checklist
    • Databases
    • Content in file systems
    • Multimedia (photos, videos)
    • Data that may not transfer over and needs to be rebuilt at new environment (e.g. Search-engine indexes, database indexes, database statistics)
  • Export and import initial content into new environment
  • Install base software and platforms at new environment
  • Install your Web applications at new environment
  • Compare configurations at old environments with configurations at new environments
  • Do QA testing of Web applications at new environment using transition DNS names
  • Review rollback plan to check that it will actually work if needed.
    • Test parts of it, where practical
  • Lower production DNS TTL for the switchover (a verification sketch follows this list)
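
For the DNS TTL item above, a minimal sketch for verifying the TTL that is actually being served (www.example.com and ns1.example.com are placeholders for your own hostname and authoritative name server):

# Check the TTL currently served for the site's A record
# (www.example.com is a placeholder for your production hostname)
dig +noall +answer www.example.com A
# The second column of the answer line is the remaining TTL in seconds.
# Query the authoritative name server directly to see the configured TTL:
dig +noall +answer www.example.com A @ns1.example.com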

During-Cutover Migration Checklist

  • Communicate that migration cutover is starting
  • Data/content migration
    • Import/refresh delta content
    • Rebuild any data required at new environment (e.g. Search-engine indexes, database indexes, database statistics)
  • Activate Web applications at new environment
  • Do QA testing of Web applications at new environment
  • Communicate
    • Communicate any publishing freezes and other disruptions
    • Activate maintenance message pages if applicable
  • Switch DNS to point Web application to new hosting environment
  • Communicate
    • Disable maintenance message pages if applicable
    • When publishing freezes and any disruptions are over
    • Communicate that the Web application is ready for QA testing in production.
  • Flush CDN content cache, if needed
  • Do QA testing of the Web application in production
    • From the private network
    • From the public Internet
  • Communicate
    • The QA testing at the new hosting location’s production environment has passed
    • Any changes for accessing tools at the new hosting location
  • Confirm that DNS changes have propagated to the Internet (see the sketch below)
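
A minimal sketch for the propagation check above, querying a few public resolvers (the hostname and the expected IP are placeholders):

# Spot-check public resolvers to confirm the new record has propagated
# (www.example.com and 203.0.113.10 are placeholders for your host and the new IP)
for resolver in 8.8.8.8 8.8.4.4 208.67.222.222; do
  echo "$resolver -> $(dig +short www.example.com A @"$resolver")"
done
# All resolvers should now return the new hosting environment's IP (e.g. 203.0.113.10)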

Post-Cutover Migration Checklist

  • Cleanup
    • Remove any temporary redirects that are no longer needed
    • Remove temporary DNS entries that are no longer needed
    • Revert any CDN configuration changes that are no longer needed
    • Flush CDN content cache, if needed
  • Check that incoming traffic to the old hosting environment has faded down to zero (a log-checking sketch follows this list)
  • Check that traffic numbers at new hosting location don’t show any significant change from old hosting location
    • Soon after launch
    • A few days after launch
  • Check monitoring
    • Internal systems monitoring
    • External (e.g. Keynote, Gomez)
  • Increase DNS TTL settings back to normal
  • Archive all required data from old environment into economical long-term storage (e.g. tape)
  • Decommission old hosting environment
  • Communicate
    • Project completion status
    • Any remaining items and next steps
    • Any changes to support at new hosting environment
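
For the traffic fade-out check in the list above, one rough way to see whether requests are still reaching the old origin is to watch its Web server access log; a minimal sketch (the log path assumes a common Apache layout and is a placeholder):

# Count requests arriving at the old origin over a five-minute window
# (/var/log/apache2/access.log is a placeholder path; adjust for your server)
before=$(sudo wc -l /var/log/apache2/access.log | awk '{print $1}')
sleep 300
after=$(sudo wc -l /var/log/apache2/access.log | awk '{print $1}')
echo "Requests to the old origin in the last 5 minutes: $(( after - before ))"
# A count that keeps dropping toward zero indicates traffic has moved to the new environment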

The checklists are also published on the RevolutionCloud book Web site at www.revolutioncloud.com/2010/01/checklists-migration/ and on the Checklists Wiki Web site at www.checklistnow.org/wiki/IT_Web_Application_Migration

Save Money On Hosting & CDN By Optimizing Your Architecture & Applications

If you manage technology for a company that has a large Web presence, it is likely that a large percentage of your total technology costs is spent on the Web hosting environment, including the Content Delivery Network (CDN, e.g. Akamai, LimeLight, CDNetworks, Cotendo). In this article, we discuss some ways to manage these costs.

Before we discuss how to optimize your architecture and applications to keep hosting expenses optimally low, let us develop a model for comprehensively understanding a site’s Web hosting costs.

Step 1. Develop a model for allocating technology operations & infrastructure costs to each Web site/brand

Let us assume for this example that your company operates some medium to large Web sites and spends $100K/month on fully managed[1] origin[2] Web hosting and another $50K/month on CDN. That means your company spends $1.8MM/year on Web site hosting.

It is important to add origin Web hosting and CDN costs to know your true Web hosting costs, especially if you operate multiple Web brands and need to allocate Web hosting costs back to each. For example, let us assume you have two Web sites: brandA.com, a dynamic ecommerce site costing $10K/month in origin hosting plus $2K/month in CDN; and brandB.com, serving a lot of videos and photos and costing $5K/month in origin hosting plus $19K/month in CDN. In this example, brandA.com actually costs $12K/month, which is half the hosting cost of brandB.com at $24K/month. Without adding the CDN costs, you might mistakenly assume the opposite: that brandA.com costs twice as much to host as brandB.com. Origin hosting and CDN are two sides of the same coin. We recommend that you manage them together from both a technology/architecture and a budget perspective.

Then add the costs of third-party, vendor-provided parts of the site rented in the software-as-a-service model. Next, add the costs of licensed software used at your hosting location. Let us assume that brandA.com also has:

  • some blogs hosted at wordpress.com for $400/month
  • Google Analytics for $0/month
  • Other licensed platform/application software running on your servers billed separately from the managed hosting. Let us assume brandA.com’s share of that is $1,000/month.

So your Web hosting and infrastructure costs for brandA.com would be $13,400/month. That’s $160,800/year.

Assuming that many of your Web sites share infrastructure and systems management & support staff at your Web hosting provider, you may not have a precise allocation of costs to each brand. That’s OK: it doesn’t need to be perfect, nor should it be a staff-time-consuming calculation every month. Work with your hosting provider to implement a formula/algorithm that provides a reasonably good breakdown and needs to change only when there is a major infrastructure change.
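
As one illustration of such an allocation formula, the sketch below splits a shared monthly hosting invoice across two brands in proportion to bandwidth used (all numbers are made-up placeholders, and bandwidth is just one possible allocation key):

# Allocate a shared hosting invoice to brands in proportion to bandwidth used
# (all figures below are made-up placeholders)
shared_invoice=100000    # total shared managed-hosting bill for the month, in dollars
brandA_gb=2000           # bandwidth used by brandA.com, in GB
brandB_gb=8000           # bandwidth used by brandB.com, in GB
total_gb=$(( brandA_gb + brandB_gb ))
echo "brandA.com share: \$$(( shared_invoice * brandA_gb / total_gb ))"
echo "brandB.com share: \$$(( shared_invoice * brandB_gb / total_gb ))"
# Prints $20000 for brandA.com and $80000 for brandB.com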

Side Note: In order to stay competitive, adapt to changes in the market and meet changing customer needs, brandA.com also needs to do product and software development on a regular basis. However, that is beyond the scope of this discussion. Managing ongoing product and software development costs for brandA.com could be the subject of another article.

Step 2. Regularly review the tech operations costs for each brand and make changes to control costs

Every month, review your tech operations costs for your business as a whole and for each brand. Make changes in technology and process as needed to manage your expenses. If you don’t review the expenses on a monthly basis, you run the risk of small increases happening in various places every month that add up to a lot.

Without active management done on a monthly basis, brandA.com could creep up from $13,400 to $16,000 the next month and $20,000 the month after. That $1.8MM you were expecting to spend on hosting for the year could turn out to be $2.4MM.

So what does such active management include?

Monitor and manage your bandwidth charges. This is one to keep an eye on. If your bandwidth charges go over your fixed commit, your expenses can quickly blow past your budget. If you find bandwidth use increasing, investigate the cause and make course corrections. In some cases this may simply be due to an expected increase in traffic, but in other cases it could be avoided. A related article about taking advantage of browser caching to lower costs provides some tips.

Ask your engineers to monitor and manage your servers’ resource usage (CPU, memory) so that the need to add hardware can be avoided as much as possible. Enable and ensure regular communication between your technology operations team and your software development team so that developers are alerted to any application behavior that consumes more server resources than expected. Give the software developers time to resolve such issues when they are found.
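
A very simple example of such a resource check, which the operations team could run from cron on a Linux server and extend with real alerting (the thresholds below are arbitrary placeholders):

# Flag high CPU load or memory usage so developers can be alerted early
# (Linux-specific; the thresholds below are arbitrary placeholders)
load=$(awk '{print $1}' /proc/loadavg)                       # 1-minute load average
mem_pct=$(free | awk '/Mem:/ {printf "%.0f", $3/$2*100}')    # percent of RAM in use
echo "Load average: $load, memory used: ${mem_pct}%"
awk -v l="$load" -v m="$mem_pct" 'BEGIN { if (l > 4 || m > 90) print "WARNING: investigate server resource usage" }'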

Review the invoice details to make sure you understand and agree with the invoice. A Web hosting bill can be very detailed and complex to understand. Do not hesitate to ask the hosting provider to explain and justify anything you don’t understand. Don’t just assume the bills are always correct. There could (and occasionally will) be mistakes in the bills. Be sure to dispute these with the vendor in a respectful and friendly way.

These are just some examples. Please feel welcome to make more suggestions via comments on this post.

The time (and thus money) invested in controlling tech operations costs will be well worth the savings and the avoidance of huge cost increases.

Keep abreast of evolving technologies and cost saving methods. Periodically review these with your vendor(s).

Cloud computing is exciting as a technology, and it is equally exciting as a pricing model.

If you find market conditions have changed drastically, request your vendor to consider lowering rates/prices even if you are locked into a contract. You don’t lose anything by asking and the vendor’s response will be an indicator of their customer service and long term business interest with you.

  1. Fully managed Web hosting includes network & hardware infrastructure, 24×7 staff and real estate.
  2. The origin part of your Web hosting environment includes the network and server infrastructure at your hosting facility location(s) where your Web applications are installed and running. It could be in-house data centers or providers such as RackSpace, IPSoft or Savvis.

Save Your Company Money In Monthly Bills Using Browser Caching

Companies that operate heavily trafficked Web sites can save thousands of dollars every month by maximizing their use of browser-side caching.

Large Web sites pay for bandwidth at their Web hosting data center and also at their content delivery network (CDN, e.g. Akamai, LimeLight, CDNetworks). Bandwidth costs add up to huge monthly bills. On small-business or personal Web sites, where bandwidth charges are negligible, this is not an issue, but on large Web sites it is important to address and monitor.

Companies operating large Web sites often have complex situations like the following:

  • A lack of comprehensive, deep understanding of all technology cost drivers and their impacts on each other. For example, a programmer may think they are saving the company money by architecting an application in a way that requires minimal hardware servers, but not realize that the same design actually results in even higher costs elsewhere, such as CDN bills.
  • Busy development teams working on multiple projects on tight timelines. This results in compromises between product features/timelines and technical/architectural best practices/standards.
  • Web content management and presentation platform(s) that have evolved over the years
  • Staff churn over the years and an uneven distribution of technical knowledge and best practices about the Web site(s)
  • The continued following of some obsolete “best practices” and standards that were established long ago when they were beneficial, but are now detrimental.

Tech teams at complex Web sites would likely find, upon investigation, that their sites suffer from problems they either didn’t know about or didn’t realize the full extent of.

One such problem is that certain static objects on the company’s Web pages that should be cached by end users’ Web browsers are either not cached by the browsers at all or not cached for long enough. Some objects are at least cached by the CDN used by the company, but some perfectly cacheable objects are served all the way from the origin servers for every request! This is an unnecessarily costly situation that can be avoided.

In addition to wasteful bandwidth charges resulting in high monthly bills, there are also other disadvantages caused by cacheable objects being unnecessarily served from origin servers:

  • They slow down your Web pages. Instead of the browser being able to use local copies of these objects, it has to fetch them all the way from your origin servers.
  • Unnecessary load on origin Web servers and network equipment at the Web hosting facility. This can be an especially severe problem when a Web site experiences a sudden many-fold increase in traffic caused by a prominent incoming link on the home page of a high-traffic site like Yahoo, MSN or Google.
  • Additional log storage on the servers and other devices at the origin Web hosting location.
  • Unneeded processing and work that the origin servers, network equipment, CDN and the Internet in between have to do to transfer these objects from the origin to the end user’s browser. Be environmentally friendly and avoid all of this costly waste.

The increases in bandwidth, in load on servers and networking equipment, and in log file storage caused by a few objects on Web pages being served by origin servers for every request may mistakenly seem like an insignificant problem, but little drops of water make the mighty ocean. Some calculations will show that for large Web sites, the cost of this can add up to tens of thousands of dollars a month in bandwidth costs alone.

How should companies operating large Web sites solve this problem?

For technology managers:

  • Make it a best practice to maximize the use of browser-side caching on your Web pages. Discuss this topic with the entire Web technology team. Awareness among the information workers is important so that they can keep this in mind for future work and also address what’s already in place. Show the engineers some sample calculations to illustrate how much money is wasted in avoidable bandwidth costs: that will prove this is not an insignificant issue.
  • If this problem is widespread in your Web site(s), make the initial cleanup a formal project. Analyze how much money you’d save and other problems you’d solve by fixing this and present it to the finance and business management. Once you show the cost savings, especially in this economy, this project will not be hard to justify.

For engineers:

  • Read the article about optimizing caching at Google Code for technical details on how to leverage browser and proxy caching. It explains the use of HTTP headers like Cache-Control, Expires, Last-Modified, and ETag. (A quick header-checking sketch follows this list.)
  • Review any objects that are served by origin servers every time for legacy reasons that may now be obsolete.
  • Combine the JavaScript files commonly used across your Web pages into one shared file, which will have a higher probability of being cached. Do the same with external CSS style sheets.
  • Study a good book on Web site optimization like Even Faster Web Sites: Performance Best Practices for Web Developers. Share these recommendations and hold a discussion with your tech and production colleagues.
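
To see which caching headers a given object is actually served with, engineers can run a quick check like the following against any static URL (the URL below is a placeholder):

# Inspect the caching-related response headers for a static object
# (the URL is a placeholder; substitute one of your site's images, scripts or style sheets)
curl -sI http://www.example.com/images/logo.png | \
  grep -iE '^(cache-control|expires|last-modified|etag):'
# If Cache-Control and Expires are missing or very short, the object is probably being
# re-fetched from the CDN or origin far more often than necessary.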

Using Amazon Elastic Block Store (EBS) with an EC2 Instance

One of the differences between Amazon EC2 server instances and normal servers is that the server’s local disk storage state (i.e. changes to data) on EC2 instances does not persist across instance shutdowns and power-ons. This was mentioned in my earlier post about hosting my Web site on Amazon EC2 and S3.

Therefore, it is a good idea to store your home directory, Web document root and databases on an Amazon EBS volume, where the data does persist, as on a normal networked hard drive. Another benefit of using an Amazon EBS volume as a data disk is that it separates your operating system image from your data. This way, when you upgrade from a server instance with less computing power to one with more, you can reattach your data drive to the new instance.

You can create an EBS volume and attach it to your EC2 server instance using a procedure similar to the following.

First, create an EBS volume.

You can use Elasticfox Firefox Extension for Amazon EC2 to:

  • create an EBS volume
  • attach it to your EC2 instance
  • alias it to a device; in this example, we use /dev/sdh

Then format and mount the “disk” on your EC2 instance and move your folders onto it, using a procedure similar to the following commands issued from a bash shell.

# Initialize (format) the EBS drive to prepare it for use
# Note: replace /dev/sdh below with the device you used for this EBS drive
sudo mkfs.ext3 /dev/sdh
#
# Create the mount point where the EBS drive will be mounted
sudo mkdir /mnt/rj-09031301
# Side note: I use a naming convention of rj-YYMMDDNN to assign unique names
# to my disk drives, where YYMMDD is the date the drive was put into service
# and NN is the serial number of the disk created that day.
#
# Mount the EBS drive
sudo mount -t ext3 /dev/sdh /mnt/rj-09031301
#
# Temporarily stop the Apache Web server
sudo /etc/init.d/apache2 stop

#
# Move the current /home folder to a temporary backup
# This temporary backup folder can be deleted later
sudo mv /home /home.backup
#
# Copy the existing home data onto the EBS volume
# (the copy becomes /mnt/rj-09031301/home on the EBS disk)
sudo cp -a /home.backup /mnt/rj-09031301/home
#
# Symbolic link the home folder on the EBS disk as the /home folder
sudo ln -s /mnt/rj-09031301/home /home
#
# Start the Apache Web server
sudo /etc/init.d/apache2 start

Limitations:

One current limitation of EBS volumes is that a particular EBS disk can be attached to only one server instance at a time. Hopefully, in a future version of EC2 and EBS, Amazon will enable an EBS volume to be attached to multiple concurrent server instances. That would let EBS be used the way SAN or NAS storage is used in a traditional (pre-cloud-computing) server environment, and would enable scaling Web (and other) applications without having to copy and synchronize data across multiple EBS volumes. Until Amazon adds that feature, you will need to maintain one EBS disk per server and keep their data in sync. One method of making the initial clones is to use the feature that creates a snapshot of an EBS volume onto S3.

Related article on Amazon’s site:

Opinion on the Amazon S3 Outage; Checklist for Dealing with Outages

My journalist colleagues at Wired.com published some of my comments related to Amazon S3.[1] Wired also posted another article titled Customers Shrug Off S3 Service Failure. I agree with the views of many of the customers expressed in the article. Don MacAskill, CEO of the popular photo hosting site SmugMug, wrote an understanding post about it.

Throughout my career working for media companies, I have held firm to the belief that the uptime, reliability, performance, scalability and security of commercial Web sites are of paramount importance. When sites I’ve been responsible for have had issues, my colleagues and I have given our personal time and energy to their resolution. With my teams, I spend considerable time on proactive measures. I’ve had the honor of working closely with, and learning from, some people who do an excellent job running technology operations.

Experience has taught that things can and sometimes do go wrong. Sometimes calculated risks don’t pan out. Sometimes mistakes cause problems. We are human. We should strive for perfection; we can get close to it, but not fully attain it. We should be prepared for such scenarios. When they happen, we should work diligently and expeditiously on resolution and have frequent and honest communications with stakeholders and customers. Such communications during the incident should include:

Update 2010-Jan-24: This checklist is now maintained on the Checklists Wiki Web site at:

www.checklistnow.org/wiki/IT_Incident_Reporting

During-Incident Communication Checklist

  • Current status
  • What is the full impact?
  • Estimated time to resolution
  • Any recommended workarounds until resolution, if practical
  • Assurance that it is being worked on
    • It often helps to mention who all are working on it and what they are doing

The post-incident communications to stakeholders and customers should include:

Post-Incident Communication Checklist

  • Summary
  • What happened, how and why it happened?
    • Including full description of all impact
    • Do not blame[2] third parties or say things like “beyond our control”. A technology leader takes responsibility equally for both insourced and outsourced products and services.[3]
  • How it was resolved
    • If the resolution is temporary or long-term
  • Next steps
  • Plan for eliminating or minimizing this and similar incidents from happening again
  • Thank all those who helped resolve the issue, and the customers for their understanding
  • Mention the monetary credits you plan to give as per the Service Level Agreement (SLA)
    • Specify any additional ‘make goods’ or returns you plan to make to the customers above and beyond the credits as per SLA, if appropriate.

Stakeholders and customers here refer to internal customers of the technology operations team (e.g. the concerned folks in editorial, marketing, sales, finance, legal and other departments). External communications to the public Internet should be handled in consultation with legal and public relations.

S3’s outage (or any outage) isn’t to be taken lightly, but I have faith that Amazon and their customers will learn from it.

Disclaimers:

  • As explained in the terms of use of this site, any opinions expressed on my personal Web site do not reflect those of any employer, past or present. My Web site and I in my personal life neither represent nor speak for any corporation.
  • I have no affiliation, financial or otherwise with Amazon.com. I happen to be a user of their products and services, some of which I like and some that I don’t.
  • Personal Web sites like this are exempt from the performance requirements of corporate Web sites :-) My personal Web site is for expressing, learning and R&D. It also happens to be hosted on Amazon EC2 and S3.
  1. Silicon Alley Insider and ValleyWag have amusing spins on it. :-)
  2. There may be extreme instances, especially when criminal activity or malicious wrongdoing was the cause, where it would be appropriate to blame someone.
  3. It is OK to mention service providers or describe external events to explain what happened, but don’t do it in an “it was their fault, not ours” tone. The technology leader should factually describe what happened and take responsibility.

This Web Site is Now Hosted on Amazon EC2 & S3

This web site, www.rajiv.com, is now hosted on Amazon.com’s Elastic Compute Cloud (EC2) and Simple Storage Service (S3) services. They are part of the Amazon Web Services offerings. If you are a technologist, I recommend EC2 and S3. To learn more about them, you can follow the links in this article.

Benefits of hosting a Web site on EC2 & S3

  • The hosting management is self-service. Anytime you want, you can provision additional servers yourself, immediately. Unlike with most traditional hosting companies, there is no need to contact their staff and wait for them to set up your server. On EC2, once you have signed up for an account and set up one server, you can provision (or decommission) additional servers within minutes. Even the initial setup is self-service.
  • EC2 enables you to increase or decrease capacity within minutes. You can commission one or hundreds of server instances simultaneously. Because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs (a command-line sketch follows this list). Billing is metered with the hour as the unit. This flexibility of EC2 can benefit many use cases:
    • If your web sites get seasonal traffic (e.g. a fashion site during shows) or can temporarily get much higher traffic for a period of time (e.g. a news site), EC2’s pay-for-what-you-use-by-the-hour business model is cost-effective and convenient.
    • If yours is the R&D or Skunkworks group at a large or medium size organization or a startup company with limited financial resources, renting servers from EC2 can have many benefits. You don’t have to make a capital investment to get a server farm up and running, nor make long-term financial commitments to rent infrastructure. You can even turn off servers when not in use, greatly saving costs.
  • It allows me to use the modern Ubuntu[1] GNU/Linux operating system, Server Edition. Among Ubuntu’s many benefits are its user-friendliness and ease of use. Software installations and upgrades are a breeze. That means less time is required to maintain the system, while retaining the flexibility and power that being a systems administrator gives.
  • EC2 has a lower total cost of ownership for me than most hosting providers’ virtual hosting or dedicated server plans. Shared (non-virtual-server) hosting is still cheaper, but no longer meets my sites’ requirements.[2]
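
As an illustration of the API-driven control mentioned in the second bullet above, here is a minimal sketch using the present-day AWS command-line interface (not the tools described in this post); the AMI ID, key name and instance type are placeholders:

# Launch an additional server instance when extra capacity is needed
# (the AMI ID, key name and instance type are placeholders)
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro --count 1 --key-name my-key
# ...and terminate it when the extra capacity is no longer needed:
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0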

Potential drawbacks/caution with EC2 & S3

  • While S3 is persistent storage, EC2 virtual server instances’ storage does not persist across server shutdowns. So if your web site runs a database and stores files on an EC2 instance, you should implement scheduled, automated scripts that regularly back up your database and your files to S3 or other storage (a minimal example follows this list).
    • Consistent with what I read in some comments online, my EC2 virtual server instance did not lose its file-system state or settings when I rebooted it. So rebooting seems to be safe.[3]
    • This potential drawback is arguably a good thing in some ways. It compels you to implement a good backup and recovery system.
    • This also means that after installing all the software on your running Amazon Machine Image (AMI), you should save it by creating a new AMI image of it as explained in the Creating an Image section of the EC2 Getting Started Guide.
      • This is an issue since you may want to do this every time after you update your software, especially with security patches. Until Amazon implements persistent storage for EC2 instances, you could do this monthly. You can script this to be partly or fully automated. Since Amazon’s EC2 instances are quite reliable, this is not a major concern.
  • An EC2 instance’s IP address and public DNS name persist only while that instance is running. This can be worked around as described under the tech specs section below.
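
A minimal sketch of such a backup script, assuming the s3cmd tool is installed and configured; the database name, paths and bucket name are placeholders:

# Nightly backup of a MySQL database and the Web document root to S3
# (database name, paths and bucket are placeholders; mysqldump credentials
#  are assumed to come from a ~/.my.cnf file)
backup_date=$(date +%Y%m%d)
mysqldump --single-transaction mydatabase | gzip > /tmp/db-$backup_date.sql.gz
tar czf /tmp/www-$backup_date.tar.gz /var/www
s3cmd put /tmp/db-$backup_date.sql.gz s3://my-backup-bucket/db/
s3cmd put /tmp/www-$backup_date.tar.gz s3://my-backup-bucket/www/
rm /tmp/db-$backup_date.sql.gz /tmp/www-$backup_date.tar.gz
# Schedule it with cron, e.g.:  0 3 * * * /home/ubuntu/backup-to-s3.sh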

Some articles about Amazon’s hosting infrastructure services:

Tech specs of my site:

  1. www.ubuntu.com
  2. I plan to split rajiv.com into separate sites: the India Comedy site will move to comedy.rajiv.com and the SPV Alumni site will move to spv.rajiv.com. The latter two are community sites and will benefit from a community CMS like Drupal.
  3. However, please be aware of a known issue that on some occasions has caused instance termination on reboots.
  4. I created my AMI virtual machine by building on top of a public Ubuntu AMI by Eric Hammond.