
SHA-3 Hash Generator


SHA-3, originally known as Keccak, is a cryptographic hash function. Learn more on Wikipedia…
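If you prefer a command line to a Web form, recent versions of OpenSSL (1.1.1 and later) can also compute SHA-3 digests. A minimal sketch, where the input text and the 256-bit output length are just examples:

# Print the SHA3-256 digest of a piece of text (requires OpenSSL 1.1.1 or later)
printf 'hello world' | openssl dgst -sha3-256
#
# Other supported output lengths: -sha3-224, -sha3-384 and -sha3-512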


Hosting Large-Scale Web Sites: Contract Review Guide for the CTO

If you host and operate large-scale Web sites, or negotiate contract agreements with vendors that provide such services, you need to understand what should be included in a Web hosting infrastructure. This knowledge will help you in three areas:

  1. Providing reliability, scalability & good performance
  2. Minimizing risks via security, privacy, regulatory compliance and reduction of vulnerability to potential lawsuits
  3. Reducing and controlling costs

This guide is meant to help you review upcoming contracts as well as existing services.

Likely audience for this article: Managers, directors and vice presidents of technology, operations or finance at organizations operating large-scale Web sites; Executives supervising technology: CTO, CIO, CFO, COO.

Seven Aspects of Large-Scale Web Hosting

Large-scale Web hosting infrastructure and services can be organized into the following seven areas:

  1. Servers & Environments
  2. Network & Other Appliances
  3. Managed Hosting Services
  4. Third-party Provided Services
  5. Program Management Office, PMO
  6. Account Management
  7. Infrastructure & Facilities

Checklist for Review

You can use the following checklist to review your hosting services or a vendor’s proposal.

What to look for

When you review each item below, consider:

  • Is this item included in the vendor’s proposal or in the services we are currently receiving? If it is not included, what are the good reasons it isn’t included?
  • Is this needed for my organization’s current business requirements? Can we do without it? Is it a must-have or a nice-to-have for present and reasonably foreseeable future needs?
  • What are the alternatives?
  • What is the unit price of this item? How does the price scale up as needs grow? How does the price scale down when need for this item decreases?
  • What level of fault-tolerance does this item need? (e.g. redundancy, standby backups, time to recover)

Some of the above review questions apply only to tangible items (servers, appliances and the like) and not to services and processes.

Servers

Servers may be physical hardware servers and/or virtual servers managed using software such as VMware, Parallels Virtuozzo or Xen. The services listed below can each run on separate servers, or multiple services can share a server. It is generally better to have each server run only one (or a minimal number) of the major services listed below: that reduces complexity and saves expensive staff time spent maintaining, troubleshooting and recovering. Virtualization makes this economical by allowing multiple virtual servers to share the same physical hardware.

The following is a list of commonly found services at large-scale Web sites that require servers.

  • Web
  • Application
    • Content Management software. This is the software that the Editorial and Production teams use to submit, edit, package and manage articles, photos and other Web site content
    • Dynamic Content Assembly. Typically done using Portal Server software, either third-party supplied or in-house developed
    • Data Processing. E.g. workflow engines, jobs/tasks processing servers
    • Middleware
    • Other applications. These are applications that happen to be separate from the main content management system. They could be separate for any number of reasons. E.g. blogs, forums
  • Database

Server Environments

An environment is a self-sufficient set of servers assigned to serve a purpose as described below. Large-scale Web sites typically utilize multiple environments.

  • Production
    • This serves the Web sites to the customers and public.
    • Typically has 99.9% or higher uptime guarantee in the Service Level Agreement
      Please refer to the table below titled Understanding SLA Uptime Guarantee Percentages to compare different time windows when the SLA Uptime measurement gets reset. I recommend that you ensure that the reset window you get is the same duration as your billing cycle (usually monthly) or shorter. This will help avoid having long downtimes without penalty.
  • Staging
    • This is the environment where content packages are developed, integrated and previewed by the Editorial, Design and Production teams before they are published to end users, for example when working on a major site redesign or relaunch over several months. Since the tech teams are often making changes to the Development Integration and QA environments, those environments are not suitable for content integration work by the Editorial and Design teams. Staging is used at large-scale Web sites where multiple Editors, Designers and Production staff are collaboratively creating content packages and new sections. At smaller Web sites, or in cases where just one or two Editors are working on a piece of content like an individual article, previewing is done in the Production environment itself, with access controls.
  • Quality Assurance (QA)
    • The QA engineers perform Functional Testing and Load Testing here. Doing functional testing while a load test is running is sometimes a good idea as it simulates usage closer to live production.
  • Development Integration
    • Software product code developed by different engineers is integrated here. There could be continuous integration or nightly builds.
    • This is where developers ensure that their code works with other developers’ code (does not break the build, and does not conflict resulting in undesired functionality)
    • Programmers should ensure that the product works here before handing it off to the QA engineers for testing

In a virtualized system the environments may not be physically separate and may regularly grow and shrink at different times. For example when hosted at a cloud computing provider, the QA environment may scale up during load testing and shut down completely during the hours the QA team is not working.

Network & Other Appliances

These are devices to which various servers are directly or indirectly connected.

Managed Hosting Services

  • Systems Administration
    • This typically includes all the management of the physical hardware up to and including the operating system and popular applications that complement the operating system.
  • Database Administration Services
  • Applications Management Services
    • This typically includes all the administration of the applications that run on top of the operating system.
  • Systems Monitoring, Alerting & Reporting
  • Web Support Help Desk, 24×7

Third-party Services

Program Management Office, PMO

  • Project Management
    • PM people, organization, processes
    • Collaborative project management tools, e.g. JIRA, RallyDev, Mingle
    • Shared documentation management tools, e.g. Wiki
  • Change Management Processes & Tools
    • Documentation system
    • Tools for source control, build & deployment
  • RASIC Matrix Describing Roles & Responsibilities
  • Escalation Flowcharts
  • Crisis Management & Emergency Procedures

Account Management

  • Customer service
  • Relationship management
  • Master Services Agreement, MSA
  • Statements of Work, SOW
  • Service Level Agreement, SLA
    • What to look for in the SLA is the subject of a separate article in this series.
  • Billing
    • Monthly bills provided by telecommunications (telco) and hosting companies tend be extremely complex and lengthy. As a result, they are difficult and time-consuming to review.
    • Always factor one-time setup fees and any implementation fees paid to the vendor and/or their partners into the total cost of the contract. Don’t look only at the recurring charges. A simple way to do this (see the sketch after this list):
      contract cost = implementation fees + (estimated recurring fees x number of recurrences committed to)
      e.g. contract cost for 1 year = setup fees + (estimated monthly charges x 12)
      For most hosting / telco contracts I recommend this simple calculation over more sophisticated methods that factor in time value of money because the recurring fees are estimates anyway.
    • Make sure that a 1-year contract is really a 1-year contract, and not effectively a 13-month, 15-month or even longer contract, by ensuring the following:
    • The contract’s start date is the first date for which the recurring billing begins. This is useful in determining the default end date of the contract. For example:
      If you agree to a 1-year contract with monthly billing where the first monthly bill will be for services provided April 1, 2010 through April 30, 2010, then the default termination date for the contract is March 31, 2011. If the service provider estimates 3 months for implementation that ends on June 30, 2010, and they charge you the monthly services for April, May and June, don’t let the vendor tell you the contract start date is July 1. If you paid the monthly fees for services provided on April 1, then the start date is April 1.
    • If the vendor charges you fractional monthly fees for the implementation period and/or charges you one-time setup fees, then you should negotiate and agree on a contract end date that is fair to both parties. Use this guideline: the contract commitment should aim towards a certain money target (revenue for the vendor). If the implementation fees are equivalent to, say, 3 months of recurring billing, you might agree that the end date is 9 months after the first recurring billing cycle begins.
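As a back-of-the-envelope illustration of the contract-cost formula above, here is a small shell sketch. The fee amounts are made-up examples, not figures from any real contract:

# Hypothetical example: $20,000 setup fees plus $10,000/month for a 1-year term
SETUP_FEES=20000
MONTHLY_FEES=10000
MONTHS=12
#
# contract cost = implementation fees + (estimated recurring fees x recurrences)
CONTRACT_COST=$((SETUP_FEES + MONTHLY_FEES * MONTHS))
echo "Estimated 1-year contract cost: \$${CONTRACT_COST}"   # prints: Estimated 1-year contract cost: $140000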

Infrastructure & Facilities

This item, infrastructure & facilities, is beyond the scope of this article. It includes the buildings, electric power, generators, climate control, physical security and related staffing.

Understanding SLA Uptime Guarantee Percentages

The table below helps illustrate why you should ensure that the “SLA Uptime Measurement Meter Reset Window” is the same duration as your billing cycle or shorter.

Availability % | Downtime per year | Downtime per month (30 days) | Downtime per week | Downtime per day
90 “one nine” | 36.5 days | 72 hours | 16.8 hours | 2.4 hours
95 | 18.25 days | 36 hours | 8.4 hours | 1.2 hours
97 | 10.96 days | 21.6 hours | 5.04 hours | 43.2 minutes
98 | 7.3 days | 14.4 hours | 3.36 hours | 28.8 minutes
99 “two nines” | 3.65 days | 7.2 hours | 1.68 hours | 14.4 minutes
99.5 | 1.83 days | 3.6 hours | 50.4 minutes | 7.2 minutes
99.8 | 17.52 hours | 86.23 minutes | 20.16 minutes | 2.88 minutes
99.9 “three nines” | 8.76 hours | 43.8 minutes | 10.1 minutes | 1.44 minutes
99.95 | 4.38 hours | 21.56 minutes | 5.04 minutes | 43.2 seconds
99.99 “four nines” | 52.56 minutes | 4.32 minutes | 1.01 minutes | 8.64 seconds
99.999 “five nines” | 5.26 minutes | 25.9 seconds | 6.05 seconds | 0.864 seconds
99.9999 “six nines” | 31.5 seconds | 2.59 seconds | 0.605 seconds | 0.0864 seconds
99.99999 “seven nines” | 3.15 seconds | 0.259 seconds | 0.0605 seconds | 0.0086 seconds

Sources: Wikipedia, my calculations
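The numbers in the table follow from a simple formula: allowed downtime = (1 − availability percentage) × length of the measurement window. A small shell sketch that reproduces the per-year and per-day figures for 99.9% (“three nines”):

# Allowed downtime for a given availability percentage
AVAILABILITY=99.9
echo "$AVAILABILITY" | awk '{
  down = (100 - $1) / 100
  printf "Per year: %.2f hours\n", down * 365 * 24     # 8.76 hours
  printf "Per day:  %.2f minutes\n", down * 24 * 60    # 1.44 minutes
}'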

This article is part of a series titled “Guide for the CTO: A compilation of articles on how to lead and manage technologies, projects and people”.


How to Avoid Duplicate Search Results when using Apple Mail.app with Gmail

I use Gmail’s IMAP feature with Mac OS’s built-in Mail.app program. Mail.app keeps local copies (on all my personal Macs) of all the email messages that I’ve kept (since 1994). It enables me to:

  • Effectively work offline with all my emails (searching, reading and composing), when my computer is not online. That’s sometimes the case when I’m traveling, especially in places where Internet access is unavailable, unreliable, slow, insecure or too expensive.
  • Regularly back up all my saved emails using Apple’s Time Machine. It is also a precaution in case I someday no longer have my Gmail account and/or move to another email service. With email account theft rampant these days, it is important to have up to date backups of all your emails.
  • Send digitally signed and encrypted emails when needed.
  • Compose greeting cards and other visually rich emails with pictures using Mail.app’s stationery.

The Problem:

When you initially set up Mail.app to use Gmail via IMAP, you will observe that when you search your mail using Apple’s built in Spotlight feature, the search results will show duplicate (or more) copies of your email. This is because Gmail’s labels and special views (like “All Mail” or “Starred”) appear as separate IMAP folders in Mail.app. Messages in these seemingly “separate IMAP folders” appear to be duplicates to Mail.app and Spotlight search.

The Solution:

To solve this problem, I suggest exposing only the essential Gmail special views and labels as IMAP folders in Mail.app, and then telling Spotlight search to index only the master copies of the messages in Gmail’s “All Mail” folder. To accomplish this, I did the following.

Note: I do the labeling of my messages via the Gmail Web interface and do not need to see the labels applied to messages when I’m using Mail.app. My solution below hides all my custom Gmail labels from Mail.app and that’s fine with me.

In Gmail (via the Web interface)

Go to “Settings > Labs” and activate “Advanced IMAP Controls”. After enabling it, go to “Settings > Labels” and uncheck “Show in IMAP” for each custom Gmail label you have created. Also uncheck it for “Starred”, since Mail.app already shows those messages as flagged in their other folders.

Leave “Show in IMAP” checked for “Inbox”, “Sent Mail”, “Drafts”, “All Mail” and “Trash”, since these are system folders that Apple Mail.app should be configured to use. Also leave it checked for the label folder called “Apple Mail To Do”, which is an Apple Mail system folder.

On your Macs

Go to “System Preferences > Spotlight > Privacy” and exclude the following folders from appearing in search results. Where it says username@gmail.com below, use your own Gmail account name.

~/Library/Mail/IMAP-username@gmail.com/INBOX.imapmbox

~/Library/Mail/IMAP-username@gmail.com/[Gmail]/Sent Mail.imapmbox

Also, if you are displaying your starred folder via IMAP, exclude:

~/Library/Mail/IMAP-username@gmail.com/[Gmail]/Starred.imapmbox

Now when you search messages in your Mac’s Mail.app, only results from your Gmail All Mail folder will appear.
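To spot-check that the exclusions are working, you can ask Spotlight how many mail messages it still has indexed under an excluded mailbox. A hedged sketch from the Terminal; the mailbox path uses the same username@gmail.com placeholder as above, and the query assumes Mail messages are indexed under Apple’s com.apple.mail.emlx content type:

# Count Spotlight-indexed mail messages under the excluded INBOX mailbox.
# If the Privacy exclusion is in effect, this should print 0.
mdfind -count \
  -onlyin ~/Library/Mail/IMAP-username@gmail.com/INBOX.imapmbox \
  "kMDItemContentTypeTree == 'com.apple.mail.emlx'"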

Checklist for Migration of Web Application from Traditional Hosting to Cloud

In 2010, Cloud Computing is likely to see increasing adoption. Migrating Web applications from one data center to another is a complex project. To assist you in migrating Web applications from your hosting facilities to cloud hosting solutions like Amazon EC2, Microsoft Azure or RackSpace’s Cloud offerings, I’ve published a set of checklists for migrating Web applications to the Cloud.

These are not meant to be comprehensive step-by-step, ordered project plans with task dependencies. These are checklists in the style of those used in other industries, like aviation and surgery, where complex projects need to be performed. Their goal is to get the known tasks covered so that you can spend your energies on any unexpected ones. To learn more about the practice of using checklists in complex projects, I recommend the book The Checklist Manifesto by Atul Gawande.

Your project manager should adapt them for your project. If you are not familiar with some of the technical terms below, don’t worry: Your engineers will understand them.

Pre-Cutover Migration Checklist

The pre-cutover checklist should not contain any tasks that “set the ship on sail”, i.e. you should be able to complete the pre-cutover tasks, pausing and adjusting where needed, without worrying that there is no turning back.

  • Set up communications and collaboration
    • Introduce migration team members to each other by name and role
    • Set up email lists and/or blog for communications
    • Ensure that appropriate business stakeholders, customers and technical partners and vendors are in the communications. (E.g. CDN, third-party ASP)
  • Communicate via email and/or blog
    • Migration plan and schedule
    • Any special instructions, FYI, especially any disruptions like publishing freezes
    • Who to contact if they find issues
    • Why this migration is being done
  • Design maintenance message pages, if required
  • Setup transition DNS entries
  • Set up any redirects, if needed
  • Make CDN configuration changes, if needed
  • Check that monitoring is in place and update if needed
    • Internal systems monitoring
    • External (e.g. Keynote, Gomez)
  • Create data/content migration plan/checklist
    • Databases
    • Content in file systems
    • Multimedia (photos, videos)
    • Data that may not transfer over and needs to be rebuilt at new environment (e.g. Search-engine indexes, database indexes, database statistics)
  • Export and import initial content into new environment
  • Install base software and platforms at new environment
  • Install your Web applications at new environment
  • Compare configurations at old environments with configurations at new environments
  • Do QA testing of Web applications at new environment using transition DNS names
  • Review rollback plan to check that it will actually work if needed.
    • Test parts of it, where practical
  • Lower production DNS TTL for switchover (see the example after this list)
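For the DNS TTL item above, it helps to confirm the current TTL before lowering it and to verify that the lowered value has taken effect. A minimal sketch using dig, with www.example.com standing in for your production hostname and ns1.example.com for your authoritative name server:

# Show the A record and its remaining TTL (second column) as cached by your resolver
dig +noall +answer www.example.com A
#
# Query the authoritative name server directly to see the configured TTL
dig +noall +answer www.example.com A @ns1.example.com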

During-Cutover Migration Checklist

  • Communicate that migration cutover is starting
  • Data/content migration
    • Import/refresh delta content
    • Rebuild any data required at new environment (e.g. Search-engine indexes, database indexes, database statistics)
  • Activate Web applications at new environment
  • Do QA testing of Web applications at new environment
  • Communicate
    • Communicate any publishing freezes and other disruptions
    • Activate maintenance message pages if applicable
  • Switch DNS to point Web application to new hosting environment
  • Communicate
    • Disable maintenance message pages if applicable
    • When publishing freezes and any disruptions are over
    • Communicate that the Web application is ready for QA testing in production.
  • Flush CDN content cache, if needed
  • Do QA testing of the Web application in production
    • From the private network
    • From the public Internet
  • Communicate
    • The QA testing at the new hosting location’s production environment has passed
    • Any changes for accessing tools at the new hosting location
  • Confirm that DNS changes have propagated to the Internet (see the example after this list)
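For the propagation check above, one simple approach is to query a couple of well-known public resolvers and confirm that they all return the new address. A sketch, again with www.example.com as a placeholder:

# Compare the answers returned by public DNS resolvers (Google and OpenDNS)
for resolver in 8.8.8.8 8.8.4.4 208.67.222.222; do
  echo "== $resolver =="
  dig +short www.example.com A @"$resolver"
done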

Post-Cutover Migration Checklist

  • Cleanup
    • Remove any temporary redirects that are no longer needed
    • Remove temporary DNS entries that are no longer needed
    • Revert any CDN configuration changes that are no longer needed
    • Flush CDN content cache, if needed
  • Check that incoming traffic to old hosting environment has faded away down to zero
  • Check that traffic numbers at new hosting location don’t show any significant change from old hosting location
    • Soon after launch
    • A few days after launch
  • Check monitoring
    • Internal systems monitoring
    • External (e.g. Keynote, Gomez)
  • Increase DNS TTL settings back to normal
  • Archive all required data from old environment into economical long-term storage (e.g. tape)
  • Decommission old hosting environment
  • Communicate
    • Project completion status
    • Any remaining items and next steps
    • Any changes to support at new hosting environment

The checklists are also published on the RevolutionCloud book Web site at www.revolutioncloud.com/2010/01/checklists-migration/ and on the Checklists Wiki Web site at www.checklistnow.org/wiki/IT_Web_Application_Migration

Benefits of Using IRC or Group Chat & Video Conference During Incident Management

When a team of engineers is dealing with a real-time incident, such as a system outage, troubleshooting a problem or dealing with a malicious hacking attack, having excellent communications is critically important. The appropriate communications tool can make a world of difference in dealing with the issue and learning from it afterwards. As important as the engineering work itself is, lack of good communications is what often gets tech teams in trouble.

You should enable real-time communication for certain collaborative tasks. This will reduce unnecessary email traffic and clutter, enable people to focus better on their tasks, and minimize time wasted bringing each other up to speed when multiple people are working together in real time on a near-term collaborative task, such as:

  • Crisis Management
  • Troubleshooting
  • Dealing with hacking attacks
  • Build and deployment
  • Web application migration
  • Upgrade or maintenance
  • QA testing

Many companies use a phone conference and/or email to coordinate in real time while a collaborative activity like the above is ongoing. Since email is not instantaneous and real-time the way a group chat application is, and since email is not a suitable medium for quick questions and quick one-line responses, smart teams use a real-time group chat tool like IRC (Internet Relay Chat) to enable and facilitate real-time conversation. Benefits of using IRC or another real-time textual group chat tool instead of email include:

  • Tech managers, project managers, crisis managers and new tech people joining the effort can quickly catch up with what has been going on (in any level of detail they want) by reading the IRC history transcript so far. This is much faster and more efficient than using email or pulling someone away to talk in person to ask what has been going on. (If email were used instead of IRC, a new person joining in would have missed the previous emails on the topic.)
  • When an engineer working on such a collaborative task steps away for a while and comes back, they can quickly catch up on what transpired while they were away by reading the IRC history transcript.
  • Email is not cluttered by short back and forth messages with lots of text to read and filter
  • The IRC transcript can be used for the post incident retrospective and report (“post-mortem”).
  • Unlike a phone-only conference, the IRC transcript can be read and analyzed to learn lessons from this incident. For example:
    • Analyze what problems the team ran into
    • Analyze what worked and what didn’t
    • Analyze how well people collaborated and communicated
    • Timelines of events

I can personally attest to the above benefits. Over the past 15+ years, my development and operations teams at different companies have regularly used IRC to great advantage. Tools like Wikis and blogs are great for collaboration, documentation and sharing information on projects. A group chat tool like IRC is an indispensable tool for real-time collaboration.

Update: 2013-Aug-28:

With multi-participant video conferencing becoming commonplace thanks to Google Hangouts, I have updated this post to include video conferencing combined with group text chat.

The rest of this update has moved to its own blog entry titled ‘What I Learned During the Hacking Attacks of August 28, 2013.’

Save Money On Hosting & CDN By Optimizing Your Architecture & Applications

If you manage technology for a company that has a large Web presence, it is likely that a large percentage of your total technology costs is spent on the Web hosting environment, including the Content Delivery Network (CDN, e.g. Akamai, LimeLight, CDNetworks, Cotendo). In this article, we discuss some ways to manage these costs.

Before we discuss how to optimize your architecture and applications to keep hosting expenses optimally low, let us develop a model for comprehensively understanding a site’s Web hosting costs.

Step 1. Develop a model for allocating technology operations & infrastructure costs to each Web site/brand

Let us assume for this example that your company operates some medium to large Web sites and spends $100K/month on fully managed[1] origin[2] Web hosting and another $50K/month on CDN. That means your company spends $1.8MM/year on Web site hosting.

It is important to add origin Web hosting and CDN costs to know your true Web hosting costs, especially if you operate multiple Web brands and need to allocate Web hosting costs back to each. For example, let us assume you have two Web sites: brandA.com, a dynamic ecommerce site costing $10K/month on origin hosting plus $2K/month on CDN; and brandB.com, serving a lot of videos and photos, costing $5K/month on origin hosting plus $19K/month on CDN. In this example, brandA.com actually costs $12K/month, which is half the hosting cost of brandB.com, $24K/month. Without adding the CDN costs, you might mistakenly assume the opposite: that brandA.com costs twice as much to host as brandB.com. Origin hosting and CDN are two sides of the same coin. We recommend that you manage them both together from both technology/architecture and budget perspectives.

Then you add the costs of third-party vendor provided parts of the site rented in the software-as-service model. Next, add licensed software costs used at your hosting location. Let us assume that brandA.com also has:

  • some blogs hosted at wordpress.com for $400/month
  • Google Analytics for $0/month
  • Other licensed platform/application software running on your servers billed separately from the managed hosting. Let us assume brandA.com’s share of that is $1,000/month.

So your Web hosting and infrastructure costs for brandA.com would be $13,400/month. That’s $160,800/year.
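The arithmetic above is simple enough to put in a small script so it can be rerun each month as the numbers change. A rough sketch using the hypothetical brandA.com figures from this example:

# Hypothetical monthly cost components for brandA.com (US dollars)
ORIGIN_HOSTING=10000     # fully managed origin hosting
CDN=2000                 # content delivery network
SAAS_BLOGS=400           # blogs hosted at wordpress.com
ANALYTICS=0              # Google Analytics
LICENSED_SOFTWARE=1000   # brandA.com's share of licensed software
#
MONTHLY_TOTAL=$((ORIGIN_HOSTING + CDN + SAAS_BLOGS + ANALYTICS + LICENSED_SOFTWARE))
echo "brandA.com monthly: \$${MONTHLY_TOTAL}"        # $13,400
echo "brandA.com yearly:  \$$((MONTHLY_TOTAL * 12))" # $160,800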

Assuming that many of your Web sites share infrastructure and systems management & support staff at your Web hosting provider, you may not have a precise allocation of costs to each brand. That’s ok: it doesn’t need to be perfect, nor should it be a staff-time-consuming calculation every month. Work with your hosting provider and implement a formula/algorithm that provides a reasonably good breakdown and needs to be changed only when there is a major infrastructure change.

Side Note: In order to stay competitive, adapt to changes in the market and meet changing customer needs, brandA.com also needs to do product and software development on a regular basis. However, that’s beyond the scope of this discussion. Managing ongoing product and software development costs for brandA.com could be the subject of another article.

Step 2. Regularly review the tech operations costs for each brand and make changes to control costs

Every month, review your tech operations costs for your business as a whole and for each brand. Make changes in technology and process as needed to manage your expenses. If you don’t review the expenses on a monthly basis, you run the risk of small increases happening in various places every month that add up to a lot.

Without active management done on a monthly basis, brandA.com could creep up from $13,400 to $16,000 the next month and $20,000 the month after. That $1.8MM you were expecting to spend on hosting for the year could turn out to be $2.4MM.

So what does such active management include?

Monitor and manage your bandwidth charges. This is one to keep an eye on. If your bandwidth charges go over your fixed commit, your expenses can quickly blow past your budget. If you find bandwidth use increasing, investigate the cause and make course corrections. In some cases this may simply be due to an expected increase in traffic, but in other cases it could be avoided. A related article about taking advantage of browser caching to lower costs provides some tips.

Ask your engineers to monitor and manage your servers’ resource usage (CPU, memory) so that the need to add hardware can be avoided as much as possible. Enable and ensure regular communication between your technology operations team and your software development team so that software developers are alerted to any application behavior that is consuming more server resources than expected. Give the software developers time to resolve such issues when found.

Review the invoice details to make sure you understand and agree with the invoice. A Web hosting bill can be very detailed and complex to understand. Do not hesitate to ask the hosting provider to explain and justify anything that you don’t understand. Don’t just assume the bills are always correct. There can (and occasionally will) be mistakes in the bills. Be sure to dispute these with the vendor in a respectful and friendly way.

These are just some examples. Please feel welcome to make more suggestions via comments on this post.

The time (and thus money invested) in controlling tech operations cost will be well worth the savings / avoidance of huge cost increases.

Keep abreast of evolving technologies and cost saving methods. Periodically review these with your vendor(s).

Cloud computing is exciting as a technology, and it is equally exciting as a pricing model.

If you find market conditions have changed drastically, request your vendor to consider lowering rates/prices even if you are locked into a contract. You don’t lose anything by asking and the vendor’s response will be an indicator of their customer service and long term business interest with you.

  1. Fully managed Web hosting includes network & hardware infrastructure, 24×7 staff and real estate.
  2. The origin part of your Web hosting environment includes the network and server infrastructure at your hosting facility location(s) where your Web applications are installed and running. It could be in-house data centers or providers such as RackSpace, IPSoft or Savvis.

Save Your Company Money In Monthly Bills Using Browser Caching

Companies that operate heavily trafficked Web sites can save thousands of dollars every month by maximizing their use of browser-side caching.

Large Web sites pay for bandwidth at their Web hosting data center and also at their content delivery network (CDN, e.g. Akamai, LimeLight, CDNetworks). Bandwidth costs add up to huge monthly bills. On small-business or personal Web sites, where bandwidth use stays within the plan’s included allowance, this is not an issue, but on large Web sites it is important to address and monitor.

Companies operating large Web sites often have complex situations like the following:

  • A lack of comprehensive, deep understanding of all technology cost drivers and their impacts on each other. For example, a programmer may think they are saving the company money by architecting an application so that it requires minimal server hardware, but not realize that the same design actually results in even higher costs elsewhere, such as CDN bills.
  • Busy development teams working on multiple projects on tight timelines. This results in compromises between product features/timelines and technical/architectural best practices/standards.
  • Web content management and presentation platform(s) that have evolved over the years
  • Staff churn over the years and an uneven distribution of technical knowledge and best practices about the Web site(s)
  • The continued following of some obsolete “best practices” and standards that were established long ago when they were beneficial, but are now detrimental.

Tech teams at complex Web sites would likely find, upon investigation, that their Web sites suffer from problems that they either didn’t know about or didn’t realize the extent of the damage being caused.

One such problem is that certain static objects on the company’s Web pages that should be cached by end users’ Web browsers are either not cached by the browsers at all or not cached for long enough. Some objects are at least cached by the CDN used by the company, but some perfectly cacheable objects are served all the way back from the origin servers for every request! This is an unnecessarily costly situation that can be avoided.

In addition to wasteful bandwidth charges resulting in high monthly bills, there are also other disadvantages caused by cacheable objects being unnecessarily served from origin servers:

  • They slow down your Web pages. Instead of the browser being able to use local copies of these objects, it has to fetch them all the way from your origin servers.
  • Unnecessary load on origin Web servers and network equipment at the Web hosting facility. This can be an especially severe problem when a Web site experiences a sudden many-fold increase in traffic caused by a prominent incoming link on the home page of a high-traffic site like Yahoo, MSN or Google.
  • Additional storage in logs at the origin Web hosting locations’ servers and other devices.
  • Unneeded processing and work that the origin servers, network equipment, CDN and the Internet in the middle, all the way up to the client browsers, have to do to transfer these objects from the origin to the end user’s browser. Be environmentally friendly and avoid all this costly waste.

The increases in bandwidth, load on servers and networking equipment, and log file storage space caused by a few objects on Web pages being served by origin servers for every request may seem like an insignificant problem, but little drops of water make the mighty ocean. Some calculations will show that for large Web sites the cost of this can add up to tens of thousands of dollars a month in bandwidth costs alone.
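Here is one such back-of-the-envelope calculation; every number in it is a made-up assumption for illustration, not a real bill. Consider a single 50 KB image that browsers are not allowed to cache, requested 100 million times a month, at an assumed blended bandwidth rate of $0.30 per GB:

# All figures below are hypothetical assumptions, for illustration only
OBJECT_KB=50         # size of one object that should have been browser-cached
REQUESTS=100000000   # requests per month for that object
RATE_PER_GB=0.30     # assumed blended origin + CDN bandwidth rate, in $/GB
#
echo "$OBJECT_KB $REQUESTS $RATE_PER_GB" | awk '{
  gb = $1 * $2 / 1024 / 1024                # total transfer in GB
  printf "Transfer: %.0f GB/month, wasted cost: $%.0f/month\n", gb, gb * $3
}'
# One such object wastes on the order of $1,400/month under these assumptions;
# a page full of them, across multiple sites, is how the waste can reach tens of thousands of dollars.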

How should companies operating large Web sites solve this problem?

For technology managers:

  • Make it a best practice to maximize the use of browser-side caching on your Web pages. Discuss this topic with the entire Web technology team. Awareness among the information workers is important so that they can keep this in mind for future work and also address what’s already in place. Show the engineers some sample calculations to illustrate how much money is wasted in avoidable bandwidth costs: that will prove this is not an insignificant issue.
  • If this problem is widespread in your Web site(s), make the initial cleanup a formal project. Analyze how much money you’d save and other problems you’d solve by fixing this and present it to the finance and business management. Once you show the cost savings, especially in this economy, this project will not be hard to justify.

For engineers:

  • Read the article about optimizing caching at Google Code for technical details on how to leverage browser and proxy caching. It explains the use of HTTP headers like Cache-Control, Expires, Last-Modified, and Etag. (A quick way to check these headers is shown after this list.)
  • Review any objects that are served by origin servers every time for legacy reasons that may now be obsolete.
  • Combine the JavaScript files commonly used by your Web pages so that a single, shared file has a higher probability of being cached. Do the same with external CSS style sheets.
  • Study a good book on Web site optimization like Even Faster Web Sites: Performance Best Practices for Web Developers. Share these recommendations and hold a discussion with your tech and production colleagues.
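As referenced in the first item above, a quick way to see which caching headers a given object is actually served with is to inspect the response headers with curl. A sketch, with the URL as a placeholder:

# Fetch only the response headers of a static object and show the caching-related ones
curl -sI http://www.example.com/images/logo.png \
  | grep -i -E '^(cache-control|expires|last-modified|etag):'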

Using Amazon Elastic Block Store (EBS) with an EC2 Instance


One of the differences between Amazon EC2 server instances and normal servers is that the server’s local disk storage state (i.e. changes to data) on EC2 instances does not persist across instance shutdown and power-on. This was mentioned in my earlier post about hosting my Web site on Amazon EC2 and S3.

Therefore, it is a good idea to store your home directory, Web document root and databases on an Amazon EBS volume, where the data does persist like in a normal networked hard drive. Another benefit of using an Amazon EBS volume as a data disk is that it separates your operating system image from your data. This way, when you upgrade from a server instance with less computing power to one with more computing power, you can reattach your data drive to it for use there.

You can create an EBS volume and attach it to your EC2 server instance using a procedure similar to the following.

First, create an EBS volume.

You can use Elasticfox Firefox Extension for Amazon EC2 to:

  • create an EBS volume
  • attach it to your EC2 instance
  • alias it to a device; in this example, we use /dev/sdh
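If you prefer the command line to Elasticfox, the same steps can be done with Amazon’s EC2 API tools (assuming they are installed and configured); the volume size, availability zone, and the volume and instance IDs below are placeholders:

# Create a 10 GB EBS volume in the same availability zone as your instance
ec2-create-volume --size 10 --availability-zone us-east-1a
#
# Attach the new volume (use the vol-... ID returned above and your instance ID)
ec2-attach-volume vol-xxxxxxxx --instance i-xxxxxxxx --device /dev/sdh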

Then attach the “disk” to your EC2 instance and move your folders to it using a procedure similar to the following commands issued from a bash shell.

# Initialize (format) the EBS drive to prepare it for use
# Note: replace /dev/sdh below with the device you used for this EBS drive
sudo mkfs.ext3 /dev/sdh
#
# Create the mount point where the EBS drive will be mounted
sudo mkdir /mnt/rj-09031301
# Side note: I use a naming convention of rj-YYMMDDNN to assign unique names
# to my disk drives, where YYMMDD is the date the drive was put into service
# and NN is the serial number of the disk created that day.
#
# Mount the EBS drive
sudo mount -t ext3 /dev/sdh /mnt/rj-09031301
#
# Temporarily stop the Apache Web server
sudo /etc/init.d/apache2 stop

#
# Move the current /home folder to a temporary backup
# This temporary backup folder can be deleted later
sudo mv /home /home.backup

#
# Copy the home folder contents onto the new EBS drive
# (verify the copy before deleting the temporary backup)
sudo mkdir -p /mnt/rj-09031301/home
sudo cp -a /home.backup/. /mnt/rj-09031301/home/
#
# Symbolically link the home folder on the EBS disk as the /home folder
sudo ln -s /mnt/rj-09031301/home /home
#
# Start the Apache Web server
sudo /etc/init.d/apache2 start

Limitations:

One current limitation of EBS volumes is that a particular EBS disk can only be attached to one server instance at a given time. Hopefully, in a near future version upgrade of EC2 and EBS, Amazon will enable an EBS volume to be attached to multiple concurrent server instances. That will enable EBS to be used similar to how SAN or NAS storage is used in a traditional (pre cloud computing era) server environment. That will enable scaling Web (and other) applications without having to copy and synchronize data across multiple EBS instances. Until Amazon adds that feature, you will need to maintain one EBS disk per server and keep their data in sync. One method of making the initial clones is to use the feature that creates a snapshot of an EBS volume onto S3.

Related article on Amazon’s site:

I now use a device called Drobo for storing data at my home network (Product Review)

I now use a device called Drobo (2nd generation), manufactured by Data Robotics, Inc. as the primary data storage and backup medium at my home location. I have attached it via a USB 2.0 cable to my Apple Airport Extreme wireless network router. The Airport Extreme enables me to share USB 2.0 based storage devices on my home network so they can be simultaneously used by multiple computers. This system of making the same hard disk(s) available to multiple computers in a network is called Network Attached Storage (NAS).

The Drobo replaced my USB 2.0 external hard disk drive manufactured by Western Digital (WD) that was earlier attached to my Airport Extreme. The Drobo has significant advantages over the previous WD drive:

  • Data protection in the event of one hard drive failure as a result of wear and tear due to use over time
  • Ability to increase the storage size of the device as the volume of my data grows (more photographs, music, videos, etc.)

A data protection strategy should include both local fault tolerance and remote storage in an offsite location. For off site storage, I keep copies of my data at online locations like Amazon S3, Smugmug, Google Docs, IMAP mail servers and Apple’s Mobile Me service. Since the volume of my data is in terabytes (~ 15 years of emails, photographs, music, videos), recovering large amounts of data from online locations is reserved for extreme situations when local storage is destroyed or corrupted. The Drobo uses technology to protect data in cases of failure (via normal wear and tear) of one of the hard disks inside the Drobo. The benefit I get is similar to the benefit provided by a set of technologies called RAID.

Unlike most RAID devices for home/small-business use, the Drobo allows me to mix and match hard drives of varying capacities. It has 4 bays to insert hard drives. For example, I can have two 1 TB drives today. Next month, I can add another 1.5 TB drive to the 3rd slot. A few months later, I can add a 2 TB drive to the 4th slot. Then when I need more space next year, I can replace one of the 1 TB drives with a 2 TB drive. As I make these changes, the Drobo will automatically recalculate the optimal distribution of my data across all these drives to maximize its storage space and provide data protection. Adding a new drive or replacing a drive with another is done without downtime. The Drobo stays up and running during disk changes and the data on it remains usable by my computers, even while I’m replacing a drive or when it is redistributing data on the new set of drives after a drive is inserted.

Benefits

  • Save money by buying 1 TB drives today and 2 TB drives when they are cheaper in the future
  • Save money by buying just hard disks for adding storage instead of buying a drive plus an enclosure and power supply adapter for each drive. This is also energy efficient (fewer power adapters) and good for the Environment (fewer drive enclosures made of plastic and power adapter units purchased)
  • Save time by letting the Drobo take care of data protection at the local level. Also save time that would have been spent recovering data from remote locations in the event of a local drive failure.
  • Peace of mind having good data protection at home.

Moving data from the WD external drive to Drobo

I transferred the files by having both drives directly connected to my Apple Macbook Pro: The Drobo to the Firewire 800 port and the WD to the USB 2.0 port.

My MacBook Air‘s Time Machine backups used to be stored on the WD USB external drive. Since that drive was attached to an Airport Extreme, the Time Machine backups were stored in a special virtual storage location called a sparsebundle. I copied the sparsebundle from the WD drive to the Drobo, so my Time Machine backups (including all the Time Machine history of my MacBook Air) are now on the Drobo storage device. Because the backups were already in a sparsebundle, the transfer was a simple copy using the Mac OS Finder.

Rating: ★★★★★

Ars Technica Interview: IT consumerization in the enterprise

My colleague Jon Stokes, who is Senior Editor and co-founder at Ars Technica, interviewed me on the topic of IT consumerization and the shifting boundary between professional and personal profiles.

Jon writes:

For this third installment of my series on IT consumerization, I interviewed Rajiv Pant, vice president of technology at CondeNet (the digital arm of Conde Nast Publications). Rajiv is mainly responsible for the company’s online presence, but he’s watching many of the same trends and themes that I’ve outlined in the previous two installments play out across CondeNet.

The full article is at:
http://arstechnica.com/articles/paedia/it-consumerization-enterprise.ars