Changing the delivery of IT

Tony Bishop




Are AJAX, Virtualization, Cloud Computing, and SOA Related?

What is "service-oriented virtualization"?

Three Types of Virtualization for SOA
There are three distinct ways that the enterprise can apply virtualization concepts in SOA:

  1. Hardware Virtualization involves running multiple copies of the operating system as virtual machines (VMs) within one physical hardware device. This offers significant cost, flexibility, and risk-management benefits for the internal applications running in the data center - and provides a useful way to replicate test beds for SOA systems.
  2. Virtual Endpoints allow the SOA to define virtual locations for the services to be invoked, shielding callers completely from each service's actual endpoint. This is ideal for the dynamic processes inherent in SOA applications, as the physical address (or URL) of a service may need to change depending upon when and how it is used as part of a given workflow.
  3. Virtual Services are not just useful for SOA testing. They can provide value by streamlining development and deployment practices as a whole.
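The endpoint virtualization described in the second item can be sketched as a simple lookup through a registry. This is a minimal, illustrative sketch (the class and service names are assumptions, not any specific product's API): callers invoke a logical service name, and the registry resolves it to whatever physical URL is current, so operations can re-point a service without touching the workflow.

```python
class EndpointRegistry:
    """Maps logical service names to their current physical URLs,
    shielding callers from the actual endpoint of each service."""

    def __init__(self):
        self._endpoints = {}

    def register(self, service_name, url):
        # Registering again simply re-points the logical name.
        self._endpoints[service_name] = url

    def resolve(self, service_name):
        # Callers only ever know the logical name.
        return self._endpoints[service_name]


registry = EndpointRegistry()
registry.register("CustomerLookup", "http://app01.internal/customer/v1")

# The caller resolves the logical name at invocation time...
url = registry.resolve("CustomerLookup")

# ...so the physical address can change without changing the caller.
registry.register("CustomerLookup", "http://app02.internal/customer/v2")
```

In a real SOA this role is typically played by a service registry, ESB, or intermediary, but the indirection is the same: the caller binds to a name, not an address.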

This article focuses on the third type of virtualization - virtual services - which happens outside the data center. For the rest of the SOA application lifecycle, our ability to create virtual test beds only goes so far. Businesses often rely on live implementations to validate and develop for SOA, yet these complex, interconnected environments cannot be replicated by hardware virtualization techniques alone. We need to extend virtualization into the distributed software components and services running in those environments.

The Challenge: If SOA Can't Virtualize, It's Not Agile
Virtualization at the hardware and data-center level generates an almost immediate payback, potentially saving several million dollars in IT operating costs.

However, when we distribute component- or service-development tasks across multiple teams, we often forget that these teams still need access to live versions of the rest of the application in order to complete their own development and testing goals. There is still a high level of dependency and interconnectedness between all of those teams to deliver a completed workflow. For larger-scale enterprise systems, this puts a harsh limit on the ROI of SOA.

There is a way to connect these two technologies using service-oriented virtualization (SOV): the strategy of simulating the behavior of the deployed software assets that make up an enterprise SOA application - and synthetically constructing those that do not yet exist. Maximizing the value of SOA on a larger, enterprise-wide scale is difficult, if not impossible, without also leveraging SOV.
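The idea of simulating a deployed asset can be illustrated with a small sketch. This is a hypothetical, minimal stand-in (all names here are illustrative, not the API of any actual SOV product): a virtual service is taught a table of request/response behaviors and then answers as the live service would, without the live system ever being touched.

```python
class VirtualService:
    """Simulates a deployed service from a table of known interactions."""

    def __init__(self, name):
        self.name = name
        self._responses = {}

    def stub(self, operation, request, response):
        """Teach the virtual service how to answer a given request."""
        self._responses[(operation, request)] = response

    def invoke(self, operation, request):
        """Answer as the live service would, without touching it."""
        try:
            return self._responses[(operation, request)]
        except KeyError:
            raise LookupError(
                f"{self.name} has no stubbed behavior for "
                f"{operation}({request!r})"
            )


# Stand in for a production ERP service teams cannot hit directly.
erp = VirtualService("OrderStatus")
erp.stub("getStatus", "PO-1001", {"status": "SHIPPED"})

result = erp.invoke("getStatus", "PO-1001")
```

Real SOV tooling goes much further - recording live traffic, modeling stateful and heterogeneous protocols - but the core contract is the same: consumers develop and test against simulated behavior instead of a contended live system.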

Challenges: Stumbling Blocks for SOA
Companies adopt SOA best practices to realize business agility and cost benefits. Unfortunately, when the SOA application attempts to scale to meet the real-world needs of larger enterprises, the best-laid architectural and governance strategies for SOA still fall short, even with virtualized servers. There are several reasons why this happens.

Contention for Shared System Resources
SOA is all about leveraging enterprise systems by offering them up as shared services. However, the problem of access to shared resources plagues every single SOA initiative. A manager of a key ERP system or mainframe may be protective of their application in production and limit development and testing teams from directly accessing the application to avoid unforeseen issues (see Figure 1).

In addition, even if access is allowed, live services are often constrained by the demands of multiple organizations in an SOA environment. Agility suffers when teams are forced to queue up for access to a realistic environment to test and develop against. In larger-scale enterprise applications, creating another instance of the environment through hardware virtualization alone is cost-prohibitive.

Discontinuous Development and Integration Life Cycles
Developers need modeled service interfaces as placeholders to determine how their services will interoperate with others. For example, one development team is building a customer data service while a second team is creating an account data service. The two teams will rely on each other's services as the applications are developed in tandem, and each relies upon access to near-finished or implemented services to prove that its own services interoperate correctly.
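The modeled-interface idea above can be sketched as follows (the interface and class names are illustrative assumptions): the two teams agree on a contract up front, and the customer team codes and tests against a stub of the account service until the real implementation arrives.

```python
from abc import ABC, abstractmethod


class AccountService(ABC):
    """Modeled interface - the contract agreed between the two teams."""

    @abstractmethod
    def get_balance(self, account_id):
        ...


class StubAccountService(AccountService):
    """Placeholder the customer team develops against in the meantime."""

    def get_balance(self, account_id):
        # Canned value standing in for the real account lookup.
        return 100.0


def customer_summary(customer_name, account_id, accounts: AccountService):
    """Customer-team code, written before the account service exists."""
    return f"{customer_name}: balance {accounts.get_balance(account_id):.2f}"


summary = customer_summary("Acme Corp", "ACCT-42", StubAccountService())
```

When the account team ships, the real implementation of `AccountService` is swapped in and `customer_summary` runs unchanged - which is precisely the decoupling that lets the two teams work in parallel.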

SOA enables agility by loosely coupling components as services, so they can be developed and integrated in parallel by smaller, more distributed teams. How can we actually achieve that level of parallelism when dependencies remain? Picture the typical project plan or Gantt chart (see Figure 2): there is always a dependency on an available component that must be met before the next development team can continue. This is exactly the mold we hope to break with SOA.

Increased Complexity and Heterogeneity
While many SOA initiatives are Web services (WSDL/SOAP) centric, only about 50% of the SOA initiatives at best-in-class companies are actually Web services based. A variety of technologies are used to build SOA middleware, and these may be entirely valid - possibly better for a given organization than a Web services stack - for instance, an ESB with little reliance on Web services. To ensure SOA quality, teams need to validate the implementation and the side effects that occur across these heterogeneous technologies, rather than just testing their own Web service or middleware layer.


More Stories By John Michelsen

John Michelsen is co-Founder and “Chief Geek” at iTKO. He has over twenty years of experience as a technical leader at all organization levels, designing, developing, and managing large-scale, object-oriented solutions in traditional and network architectures. He is the chief architect of iTKO's LISA cloud virtualization and testing product and a leading industry advocate for efficient software development and quality. Before forming iTKO, Michelsen was Director of Development at Trilogy Inc., and VP of Development at AGENCY.COM. He has been titled Chief Technical Architect at companies like Raima, Sabre, and Xerox while performing as a consultant. Through work with clients like Cendant Financial, Microsoft, American Airlines, Union Pacific and Nielsen Market Research, John has deployed solutions using technologies from the mainframe to the handheld device.
