Arc of Change


We’ve talked to a lot of people over the past three months about the challenges enterprises face when migrating to new systems and processes.

Almost without exception, each conversation soon zeroed in on the problems of modernizing existing legacy systems, especially multi-tier applications split across multiple servers - the classic application, database and web tiers, implemented separately but highly dependent on each other.

We heard this a lot:

“How can we migrate our application without rewriting everything? We know we should modernize and understand that having our systems in the cloud will reduce costs, but we don’t have the time or budget to redesign it.

“If we can’t duplicate it first and then slowly refactor things, we’re stuck here on our old systems.”

We’ve spent a lot of time on this part of the migration problem, and I’ll cover it in detail in another post. Today’s post explores the background.

Systems Relationships

Anyone who has developed, implemented or supported a three-tier application over the past 20 years understands the critical - and fragile - relationship each tier’s server has with the others. Special network segments and firewall rules are often required to allow communication across the app’s domain, and in some less-than-robust legacy apps these relationships are hard-coded into the application itself.
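To make that concrete, here is a minimal, hypothetical contrast between a hard-coded tier relationship and one supplied at deploy time - the APP_DB_HOST variable name is just an illustration, not anything from a specific app:

```python
import os

# Brittle legacy pattern: the database tier's address is baked into the code,
# so the app only works while that exact server and network path exist.
LEGACY_DB_HOST = "10.20.30.40"

# More portable pattern: the relationship is injected at deploy time through
# the environment (APP_DB_HOST is a hypothetical name), so the same code runs
# wherever the tiers happen to be recreated.
DB_HOST = os.environ.get("APP_DB_HOST", "localhost")
```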

For the app to function, these relationships must be maintained. The app server must be recreated along with the database and web servers, each with its corresponding piece of the stack deployed to it. In the legacy approach, this meant deploying each piece to a separate server that was probably already running the stack; a deployment would simply reuse all the previous (hand-rolled) production configuration changes that had accumulated over time.

In more modern shops, a deployment consists of spinning up new VMs in VMware, preconfigured with all the necessary rules, and then deploying the just-built application components to those VMs. This approach is light-years ahead of the one above in that it is truly repeatable: the application can generally be fully deployed using automation and doesn’t depend on pre-existing systems. It’s still limited, though.
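As a rough sketch of what that repeatable deployment looks like, here is a hypothetical Python outline - provision_vm and deploy_artifact are stand-ins for whatever VMware or orchestration tooling a shop actually uses, and here they only print what they would do:

```python
# Hypothetical sketch of a repeatable three-tier deployment. Each tier is
# described as data and rebuilt the same way every time, instead of being
# hand-configured on a long-lived server.

TIERS = {
    "web": {"template": "web-base", "artifact": "site.tar.gz"},
    "app": {"template": "app-base", "artifact": "app.war"},
    "db":  {"template": "db-base",  "artifact": "schema.sql"},
}

def provision_vm(template: str, name: str) -> str:
    # Stand-in for cloning a preconfigured VM from a template.
    print(f"cloning VM '{name}' from template '{template}'")
    return name

def deploy_artifact(vm: str, artifact: str) -> None:
    # Stand-in for pushing the just-built component to the new VM.
    print(f"deploying {artifact} to {vm}")

def deploy_release(release_id: str) -> None:
    for tier, spec in TIERS.items():
        vm = provision_vm(spec["template"], f"{tier}-{release_id}")
        deploy_artifact(vm, spec["artifact"])

deploy_release("2024-06-01")
```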

You’ll never be able to deploy more VMs than you have physical capacity for, and if your underlying host hardware - where the guest VMs are created - fails, then you’re sunk.

To fully modernize this approach, enterprises must decouple the repeatability challenge from the hardware constraints. If you’re still deploying using the second approach above, you’re only doing this half right. You’ll need to remove that hardware constraint, too.

Data Centers and Colocation

While these conversations have been ongoing, we’ve noticed something happening in the data center market, at least regionally here in the Denver area: two data center companies, Viawest and Fortrust, appear to be growing. They’re both doing very public buildouts of new facilities that frankly look fantastic. There are probably other DC companies doing less-visible stuff that reflects their business health, too.

This got me wondering about what’s going on here. Why are these regional DCs, whose revenue comes almost exclusively from housing and running customer-owned servers (aka colocation), growing?

If an enterprise wants cloud hosting, why not use Amazon Web Services (AWS), Google Cloud Platform (GCP) or Microsoft Azure? These services are dramatically less expensive per compute cycle than colocation. You take on zero capital expenditure for physical hardware and can expand or contract your server footprint at any time.

None of that is possible with equipment that you own, physically racked in a DC where you are paying for every second of power, air handling and (possibly) monitoring in addition to the depreciation costs of the hardware.

So what’s behind this growth?

Rent vs Own

I think it’s a manifestation of the broad arc of change in which enterprises move their computing resources from an “own forever” model to a “rent temporarily” model.

Right now - in this budget, in this accounting period - moving your own hardware to a colocation facility does reduce costs. You can put off refactoring the application stack, because you keep all of your existing hardware and systems relationships, but you get a perfectly maintained 65 degrees F environment and super-clean power - with an array of diesel-powered backup generators at the ready - as well as the physical security of a DC.

These are costly to maintain in your own DC - trust me, I’ve done exactly that in the past and made the decision to move my servers to a colocation DC.

It’s initially attractive and buys your organization time to consider whether you’ll refactor at all. Not every legacy app is a candidate for refactoring.

But when you do refactor, or when you start development on a completely new project, you can use the rental model. You’ll be able to provision exactly and only the amount of computing power you need, billed down to a highly granular unit of time. Need four multi-core servers for a proof of concept that will last less than 30 days? No problem.

You don’t need to buy anything - simply rent appropriately sized instances on the platform of your choice. This brings the benefits of the just-in-time inventory concept - a practice that revolutionized manufacturing by reducing costs to only what is needed right now - to the previously expensive guessing game of purchasing hardware in advance of need.
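For example, here is a minimal sketch of that rental model, assuming AWS and the boto3 SDK - the AMI ID, instance type and tag values are placeholders for whatever the proof of concept actually needs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Rent four multi-core servers for the proof of concept...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="c5.xlarge",         # 4 vCPUs; pick whatever fits the POC
    MinCount=4,
    MaxCount=4,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "30-day-poc"}],
    }],
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]

# ...and hand them back when the 30 days are up. Billing stops; there is
# nothing to depreciate, power or cool.
ec2.terminate_instances(InstanceIds=instance_ids)
```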

Costs and Change

To remain competitive, enterprises can’t spend more than necessary to provide services to their customers. They have to reduce costs, and that requires changes to the operating model.

Migration is a key component along this arc of change.

Tincup.io helps enterprises successfully migrate legacy systems, allowing them to embrace the cost savings and efficiencies of cloud computing even if they aren’t ready to refactor everything today.
