Archive for the ‘cloud computing’ Category

Clouds, Services, and the Path of Least Resistance

I saw a tweet today, and while I don’t remember it exactly, it went something like this: “You must be successful with SOA to be successful with the cloud.” My first thought was to write up a blog post about the differences between infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) and how they each relate to SOA, until I realized that I had written exactly that article a while ago as part of my “Ask the Expert” column on SearchSOA.com. I encourage you to read that article, but I quickly thought of another angle on this that I wanted to present here.

What’s the first vendor that comes to mind when you hear the words “cloud computing”? I’m sure someone’s done a survey, but since I don’t work for a research and analysis firm, I can only give you my opinion. For me, it’s Amazon. For the most part, Amazon is an infrastructure as a service provider. So does your success in using Amazon for IaaS have anything to do with your success with SOA? Probably not. Amazon’s success as an IaaS provider, however, has everything to do with SOA.

I’ve blogged previously about the relationship between ITIL/ITSM and SOA, but they still come from very different backgrounds, ITIL/ITSM being from an IT Operations point of view, and SOA being from an application development point of view. Ask an ITIL practitioner about services and you’re likely to hear “service desk” and “tickets” but not so likely to hear “API” or “interface” (although the DevOps movement is certainly changing this). Ask a developer about services and you’re likely to hear “API,” “interface,” or “REST” and probably very unlikely to hear “service desk” or “tickets”. So, why then does Amazon’s IaaS offering, something that clearly aligns better with IT operations, have everything to do with SOA?

To use Amazon’s services, you don’t call the service desk and get a ticket filed. Instead, you invoke a service via an API. That’s SOA thinking. This was brought to light in the infamous rant by Steve Yegge. While there’s a lot in that rant, one nugget of information he shared about his time at Amazon was that Jeff Bezos issued a mandate declaring that all teams will henceforth expose their data and functionality through service interfaces. Sometimes it takes a mandate to make this type of thinking happen, but it’s hard to argue with the results. While some people will still say there’s a long way to go in supporting “enterprise” customers, how can anyone not call what they’ve done a success?

So, getting back to your organization and your success, if there’s one message I hope you take away from this, it is to remove the barriers. There are reasons that service desks and ticketing systems exist, but the number one factor has to be serving your customers. If those systems make things inefficient for your customers, they need to be fixed. In my book on SOA Governance, I stated that the best way to be successful is to make the desired path the path of least resistance. There is very little resistance to using the Amazon APIs. Can the same be said of your own services? Sometimes we create barriers through the actions we fail to take. By not exposing functionality as a service because our application can do it all internally, in-process, we create a barrier. Then, when someone else needs that functionality, the path of least resistance winds up being to replicate data, write their own implementation, or anything other than what we’d really like to see. Do you need to be successful with SOA to be successful with the cloud? Not necessarily, but if your organization embraces services-thinking, I think you’ll be positioned for greater success than without it.
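To make that barrier point concrete, here is a minimal sketch, in Python using Flask (my choice of language and framework, not anything Amazon-specific), of what “exposing functionality through a service interface” can look like. The order-status capability and its endpoint are hypothetical; the point is simply that once a capability sits behind a URL, the path of least resistance for the next consumer is to call it rather than replicate it.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical capability that previously lived in-process inside a single application.
    def look_up_order_status(order_id):
        return {"orderId": order_id, "status": "SHIPPED"}  # stubbed data for illustration

    @app.route("/orders/<order_id>/status")
    def order_status(order_id):
        # Any team with the URL can now invoke this directly -- no ticket, no service desk.
        return jsonify(look_up_order_status(order_id))

    if __name__ == "__main__":
        app.run(port=8080)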

Governance Technology and Portfolio Management

David Linthicum continued the conversation around design-time governance in cloud computing over at his InfoWorld blog. In it, he quoted my previous post, even though he chose to continue to use the design-time moniker. At least he quoted the paragraph where I state that I don’t like that term. He went on to state that I was “arguing for the notion of policy design,” which was certainly part of what I had to say, but definitely not the whole message. Finally, Dave made this statement:

The core issue that I have is with the real value of the technology, which just does not seem to be there. The fact is, you don’t need design-time service governance technology to design and define service policies.

Let’s first discuss the policy design comment. Dave is correct that I’m an advocate for policy-based service interactions. A service contract should be a collection of policies, most if not all of which will be focused on run-time interactions and can be enforced by run-time infrastructure. Taking a step backward, though, “policy design” is really a misnomer. I don’t think anyone really “designs” policies; they define them. Furthermore, the bulk of the definition that is required is probably just tweaking the parameters in a template.
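As a rough illustration of that last point, here is a sketch in Python of a contract assembled from a policy template. The policy names, parameters, and defaults are hypothetical, not taken from any particular governance product; the point is that “defining” a policy is mostly overriding a few parameters.

    POLICY_TEMPLATE = {
        "authentication": {"required": True, "scheme": "mutual-tls"},
        "rate_limit": {"requests_per_minute": 600},
        "availability": {"hours": "24x7", "target_percent": 99.9},
        "message_size": {"max_kb": 256},
    }

    def define_contract_policies(overrides):
        # Produce the policy set for one contract by overriding template parameters --
        # in practice, this tweaking is the bulk of the "definition" work.
        policies = {name: dict(params) for name, params in POLICY_TEMPLATE.items()}
        for name, params in overrides.items():
            policies[name].update(params)
        return policies

    # Example: a batch consumer that needs a higher request rate but only business hours.
    batch_consumer_contract = define_contract_policies({
        "rate_limit": {"requests_per_minute": 3000},
        "availability": {"hours": "weekdays-06:00-20:00"},
    })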

Now, moving to Dave’s second comment, he made it very clear that he was talking about governance technology, not the actual governance processes. Speaking from a technology perspective, I’ll agree that for policy management, which includes policy definition, all of the work is done through the management console of the run-time enforcement infrastructure. There are challenges with separation of concerns, since many tools are designed with a single administration team in mind (e.g., can your security people adjust security policies across services while your operations staff adjusts resource consumption and your development team handles versioning, all without having the ability to step on each other’s toes or do things they’re not allowed to do?). Despite this, however, the tooling is adequate for the vast majority (certainly better than 80-90%, in my opinion) of enterprise use cases.

The final comment from me on this subject, however, gets back to my original post. Your SOA governance effort involves more than policy management and run-time interactions. Outside of run-time, the governance effort has its closest ties to portfolio management. How are you making your decisions on what to build and what to buy, whether provided as SaaS or in house? Certainly there is still a play for technology that supports these efforts. The challenge, however, is that the processes that support portfolio management activities vary widely from organization to organization, so beyond a repository with an 80%-complete schema for the service domain, there’s a lot of risk in trying to create tools to support it and be successful. How many companies actually practice systemic portfolio management versus “fire-drill” portfolio management, where a “portfolio” is produced once a year (or at some other interval) in response to some event, ignored for the rest of the time, and then rebuilt when the next drill occurs? Until these processes become more systemic, governance tools are going to continue to be add-ons to other, more mature suites. SOA technologies tried to tie things to the run-time world. EA tools, on the other hand, are certainly moving beyond EA and into the world of “ERP for IT,” for lack of a better term. These tools won’t take over all corporate IT departments in the next 5 years, but I do think we’ll see increased utilization as IT continues its trend toward being a strategic advisor and manager of IT assets, and away from being the “sole provider.”

Governance Needs for Cloud Services

David Linthicum started a debate when he posted a blog with the attention-grabbing headline of “Cloud computing will kill these 3 technologies.” One of the technologies listed was “design-time service governance.” This led to a response from K. Scott Morrison, CTO and Chief Architect at Layer 7, as well as a forum debate over at eBizQ. I added my own comments both to Scott’s post and to the eBizQ forum, and thought I’d post my thoughts here.

First, there’s no doubt that the run-time governance space is important to cloud computing. Clearly, a service provider needs to have some form of gateway (logical or physical) that requests are channeled through to provide centralized capabilities like security, billing, metering, traffic shaping, etc. I’d also advocate that it makes sense for a service consumer to have an outgoing gateway as well. If you are leveraging multiple external service providers, centralizing functions such as digital signatures, identity management, transformations, etc. makes a lot of sense. On top of that, there is no standard way of metering and billing usage yet, so having your own gateway where you can record your own view of service utilization and make sure that it’s in line with what the provider is seeing is a good thing.
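As a sketch of that consumer-side gateway idea, the snippet below (Python with the requests library; the provider hostname and path are made up) funnels outbound calls through one function that keeps a local count of utilization, which can later be reconciled against the provider’s invoice.

    import collections
    import time
    import requests

    usage_log = collections.Counter()

    def call_provider(provider, path, **kwargs):
        # Every outbound call to an external provider goes through this one function,
        # so we keep our own record of utilization for reconciliation with the bill.
        started = time.time()
        response = requests.get(f"https://{provider}{path}", timeout=10, **kwargs)
        usage_log[(provider, path)] += 1
        print(f"{provider}{path} -> {response.status_code} in {time.time() - started:.2f}s")
        return response

    # Example (hypothetical provider and path):
    # call_provider("api.example-provider.com", "/v1/credit-score", params={"customer": "12345"})
    # At month's end, compare usage_log against the provider's invoice line items.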

The real problem with Dave’s statement is the notion that design-time governance is only concerned with service design and development. That’s simply not true. In my book, I deliberately avoided this term, and instead opted for three timeframes of governance: pre-project, project, and run-time. There’s a lot more that goes on before run-time than design, and these activities still need to be governed. It is true that if you’re leveraging an external provider, you don’t have any need to govern the development practices. You do, however, still need to govern:

  • The processes that led to the decision of what provider to use.
  • The processes that define the service contract between you and the provider, both the functional interface and the non-functional aspects.
  • The processes executed when additional consumers within your organization begin using externally provided services.

For example, how is the company deciding which service provider to use? How is the company making sure that decisions by multiple groups for similar capabilities are in line with company principles? How is the company making sure that interoperability and security needs are properly addressed, rather than being left to whatever the provider dictates? What happens when a second consumer starts using the service, yet the bills are being sent to the first consumer? Does the provider’s service model align with the company’s desired service model? Does the provider’s functional interface create undue transformation and integration work for the company? These are all governance issues that do not go away when you switch to IaaS, SaaS, or PaaS. You will need to ensure that your teams are aware of the contracts in place and don’t start sending service requests without being properly onboarded into the contractual relationship. You will also need to make sure your internal allocation of charges takes multiple consumers into account, if necessary. All of this must happen before the first requests are sent in production, so the notion that run-time governance is the only governance concern in a cloud computing scenario is simply not true.

A final point I’m adding on after some conversation with Lori MacVittie of F5 on Twitter. Let’s not forget that someone still needs to build and provide these services. If you’re a service provider, clearly, you still have technical, design-time governance needs in addition to everything else discussed earlier.

IT Needs To Be More Advisory

Sometimes a simple message can be the most powerful. In a recent discussion on a SOA Consortium phone call, I made the comment that IT needs to shift its mentality from provider to advisor. I hope that most people read this and view it as completely obvious, but recognizing the fact and actually executing against it are two different stories.

Let’s look at two trends that I think are pretty hard to argue against in most corporate IT organizations. The first trend is to build less stuff. Building less means that we’re either reusing stuff we already have, or acquiring stuff from some other source and configuring (not customizing) it to meet our needs. This could be free stuff, COTS, SaaS, etc. The point is, there is no software development required.

The second trend can be summed up as buy fewer (or no) servers. Virtualization, data center consolidation/elimination, and cloud computing all tie back to this trend, but the end result is that no one wants to build new data centers unless data centers are your business.

So, if you accept these trends as real, this means that IT isn’t providing applications and IT isn’t providing infrastructure. Then what is IT providing? I would argue that rather than looking for something new to “provide,” IT needs to change its fundamental thinking from provider to advisor or be at risk of becoming irrelevant.

Am I simply stating the obvious? Well, to some extent, I hope so. What should also be obvious, however, is that this change in role at an organizational level won’t happen by accident. While an individual may be able to do this, redefining an entire organization around the concept is a different story. A provider typically needs to understand only their side of the equation, that is, what they’re providing. If you’re a software developer, you understand software development, and you sit back and wait for someone to give you a software development task. Oftentimes, the provider may establish a set of offerings, and it’s up to the consumer to decide whether those offerings meet their needs or not. An advisor, on the other hand, must understand both sides of the problem: the needs of the consumer and the offerings of the provider.

To illustrate this, take an example from the world of financial services. A broker may simply be someone you call up and say, “Buy 100 shares of AAPL at no more than $200.” They are a provider of stock transaction services. A financial advisor, on the other hand, should be asking about what your needs are and matching those against the various financial offerings they have at their disposal. If they don’t understand your needs or if they don’t understand the financial offerings, you’re at risk of getting something sub-optimal.

IT needs to shift from being the technology provider to being the technology advisor. Will people outside of IT continue to hand-pick technology solutions without the right breadth of knowledge? Sure, just as people today go out and buy some stock without properly considering whether that’s really the right investment for them or just what everyone else is doing. The value that IT needs to add is the same that financial advisors provide: a significantly better depth of understanding of the technology domains, along with the right amount of understanding of our companies’ business domains, so we can advise the organization on technology decisions. For the non-IT worker who loves technology, this should be seen as validation of their efforts, not a roadblock.

The final message on all of this, however, is that I believe architecture plays a critical role. To actually build an advisory organization, you must categorize both the technology and the business into manageable domains where people can build expertise. Where does this categorization come from? Architecture. Taking a project-first approach is backwards. Projects should not define the categories and the architecture; the categories and desired architecture should define the projects.

So, you can see that this simple concept really does represent a fundamental shift in the way of thinking that needs to occur, and it’s one that’s not going to happen overnight. If you’re a CIO, it’s time to get this message out and start defining the steps you need to take to move your organization from a provider to an advisor.

Book Review: Cloud Computing and SOA Convergence in Your Enterprise by David Linthicum

Full disclosure: I was provided a review copy of this book by the publisher free of charge. From the back cover: “Cloud Computing and SOA Convergence in Your Enterprise offers a clear-eyed assessment of the challenges associated with this new world–and offers a step-by-step program for getting there with maximum return on investment and minimum risk.”

My review in a nutshell: This is a very well-written, easy-to-read book, targeted at IT managers, that provides a robust overview of Cloud Computing and its relationship to SOA, and the core basics of a game plan for leveraging it.

This book was an extremely easy read, which is to be expected of any book from Dave, given the approachable style of his InfoWorld blog. He provides a taxonomy of cloud offerings, extending the typical three categories (Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service) to eleven. While some may think eleven is too many, the fact remains that a taxonomy is a necessary starting point, and Dave provides solid definitions for each category that an organization can choose to use. He goes on to provide a financial model to consider for making your cloud decisions, but correctly states that cost is only one factor in the decision-making process. He covers the other dimensions that should factor into your decision making in equal detail. In chapters 5 through 10, he walks through the steps associated with moving services, data, processes, governance, and testing into a cloud environment. Dave’s steps for these chapters are very straightforward. That being said, Dave does not sugarcoat the fact that these steps are not always easy to do, and your success (or lack of it) is highly dependent on how large a domain you choose to attack.

For someone who has researched SOA and Cloud Computing in detail, this book may not provide a lot of new information, but what it does provide is a straightforward process for organizing your effort and making progress. Oftentimes, that can be the biggest challenge. For this reason, I do think the book is geared more toward the management side of IT and less toward the technical side (architects and developers), but as an architect, I did find the taxonomies presented valuable. The only area for improvement that I saw would have been a stronger emphasis on the role the service model must play in the selection process, and on having service managers inside your IT organization. Dave discussed both of these topics; however, to make stronger ties between SOA and Cloud Computing (or even ITIL and Cloud Computing), these points could have been emphasized more. Choosing the right cloud provider requires that you have solid requirements for what you need, which come from your service model. Ensuring that your requirements continue to be met and don’t get transformed into what the service provider would prefer to offer requires solid service management on your side.

Any cloud computing initiative will require that everyone involved have a base level of understanding of the goals to be achieved and the process for doing it. This book can help your staff gain that base understanding.

Governing Anonymous Service Consumers

On Friday, the SOA Chief (Tim Vibbert), Brenda Michelson, and I had a conversation on Twitter regarding SOA governance and anonymous service consumers. Specifically, how do you provide run-time governance for a service that is accessed anonymously?

If you’ve read this blog or my book, you’ll know that my take on run-time SOA governance is the enforcement and/or monitoring of compliance with the policies contained within the service contract. Therein lies the biggest problem: if the service consumer is anonymous, is there a contract? There’s certainly the functional interface, which is part of the contract, but there isn’t any agreement on the allowed request rates, hours of usage, etc. So what do we do?

The first thing to recognize is that while there may not be a formal contract that all consumers have agreed to, there should always be an implied contract. When two parties come to the table to establish an agreement, it’s likely that both sides come with a contract proposal, and the final contract is a negotiation between the two. The same thing must be considered here. If someone starts using a service, they have some implicit level of service that they expect to receive. Likewise, the service provider knows both the capacity they can currently handle and how they think a typical consumer will use the service. Unfortunately, these implied contracts can frequently be wrong. The advice here is that even if you are trying to lower the barrier to entry by allowing anonymous access, you still need to think about service contracts and design to meet some base level of availability.

The second thing to do, which may seem obvious, is to avoid anonymous access in the first place. It’s very hard to enforce anything when you don’t know where a request is coming from. Your authorization policy can simply be that you must be an authenticated user to use the service. Even in an internal setting, having some form of identity on the message, even if there are no authentication or authorization policies, becomes critical when you’re trying to understand how systems are interacting, perform capacity planning, and especially troubleshoot a problem. Even services with low barriers to entry, like the Twitter API, often require identity.
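Here’s a small sketch of that point in Python with Flask (the header name and endpoint are hypothetical): even with no authorization policy at all, every request must carry some consumer identity, and that identity gets logged so you can answer the “who is calling this?” questions later.

    from flask import Flask, request, abort, jsonify

    app = Flask(__name__)

    @app.before_request
    def require_consumer_identity():
        consumer = request.headers.get("X-Consumer-Id")
        if not consumer:
            abort(401, "A consumer identity is required, even for otherwise open services.")
        # Knowing who called is what makes capacity planning and troubleshooting possible.
        app.logger.info("request from consumer %s to %s", consumer, request.path)

    @app.route("/quotes/<symbol>")
    def get_quote(symbol):
        return jsonify({"symbol": symbol, "price": 123.45})  # stubbed response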

The next thing you should do is leverage a platform with elasticity. That is, the available capacity should grow and shrink with the demand. If it’s anonymous, and new consumers can start using it simply by getting the URLs from someone else, you have no control over the rate at which usage will scale. If the implied level of availability is that the service is always available, you’ll need on-demand resources.

Finally, you still need to protect your systems. No request is completely anonymous, and there are things you can do to ensure the availability of your service against rogue consumers. Requests will have source IP addresses on them, so you can look for bad behavior at that level. You can still do schema validation, look for SQL injection, etc. In other words, you still need DoS protection. You should also be looking at usage metrics on a frequent basis to understand the demand curve, and making decisions accordingly.
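For the source-IP angle, here is a minimal sketch of a sliding-window throttle in Python; the window length and threshold are arbitrary numbers chosen for illustration, and in practice this check would sit in your gateway rather than in application code.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_REQUESTS_PER_WINDOW = 120  # arbitrary threshold for illustration

    recent_requests = defaultdict(deque)  # source IP -> timestamps of recent requests

    def allow_request(source_ip):
        now = time.time()
        window = recent_requests[source_ip]
        # Drop timestamps that have aged out of the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_REQUESTS_PER_WINDOW:
            return False  # looks like a rogue or runaway consumer; reject or slow it down
        window.append(now)
        return True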

Is Twitter the Cloud Bus?

Courtesy of Michael Coté, I received a Poken in the mail as one of the lucky listeners to his IT Management and RIA Weekly podcasts. I had to explain to my oldest daughter (and my wife) what a Poken is, and how it’s utterly useless until I run into someone else in St. Louis who happens to have one or go to some conference where someone might have one. Oh well. My oldest daughter was also disappointed that I didn’t get the panda one when she saw it on the website. So, if you happen to own a Poken and plan on being in St. Louis anytime soon, or if you’re going to be attending a conference that I will be at (sorry, nothing planned in the near future), send me a tweet and we can actually test out this Poken thing.

Speaking of the RIA Weekly podcast, thanks to Ryan Stewart and Coté for the shout-out in episode #46 about my post on RIAs and Portals that was inspired by a past RIA Weekly podcast. More important than the shout-out, however, was the discussion they had with Jeff Haynie of Appcelerator. The three of them got into a conversation about the role of SOA on the desktop, which was very interesting. It was nice to hear someone thinking about things like inter-application communication on the desktop, since integration has been so focused on the server side for many years. What really got me thinking was Coté’s comment that you can’t build an RIA these days without including a Twitter client inside of it. At first, I was thinking about the need for a standard way of handling inter-application communication in the RIA world. Way back when, Microsoft and Apple were duking it out over competing ways of getting desktop apps to communicate with each other (remember OpenDoc and OLE?). Now that the pendulum is swinging back toward the world of rich UIs, it won’t surprise me at all if the conversation around inter-application communication for desktop apps comes up again. What’s needed? Just a simple message bus to create a communication pathway.

In reality, it’s actually several message buses. An application can leverage an internal bus for communication with its own components, a desktop/VM-based bus for communication with other apps on the same host, another bus for communication within a local networking domain, and then possibly a bus in the clouds for communication across domains. Combining this with Coté’s comment made me think, “Why not Twitter?” As Coté suggested, many applications are embedding Twitter clients in them. The direct messaging capability allows point-to-point communication, and the public tweets can act as a general pub-sub event bus. In fact, this is already occurring today. Today, Andrew McAfee tweeted about productivity tools on the iPhone (and elsewhere), and a suggestion was made about Remember The Milk, a web-based GTD program with an iPhone client and a very open integration model, which includes the ability to listen for tweets on Twitter that add new tasks. There’s a lightweight protocol to follow within the tweet, but for basic stuff, it’s as simple as “d rtm buy tickets in 2 days”. Therefore, if someone is using RTM for task management, some other system can send a tweet to RTM to assign a task to a Twitter user. The friend/follower structure of Twitter provides a rudimentary security model, but all in all, it seems to work with a very low barrier to entry. That’s just cool. Based on this example, I think it’s entirely possible that we’ll start seeing cloud-based applications that rely on Twitter as the messaging bus for communication.
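To make the idea a bit more tangible, here is a toy sketch in Python of the dispatching side of a Twitter-style bus: each incoming direct message is treated as an event and routed to a handler by its target keyword, following the “d rtm …” convention from the Remember The Milk example. The handler, parsing rules, and function names are all hypothetical; a real implementation would also need to poll or stream messages from Twitter itself.

    handlers = {}

    def subscribe(keyword):
        # Register a handler for messages addressed to a given keyword (e.g. "rtm").
        def register(func):
            handlers[keyword] = func
            return func
        return register

    @subscribe("rtm")
    def add_task(payload, sender):
        print(f"adding task for {sender}: {payload}")

    def on_direct_message(text, sender):
        # Expected form: "d <keyword> <payload>", e.g. "d rtm buy tickets in 2 days"
        parts = text.split(" ", 2)
        if len(parts) == 3 and parts[0] == "d" and parts[1] in handlers:
            handlers[parts[1]](parts[2], sender)

    on_direct_message("d rtm buy tickets in 2 days", sender="toddbiske")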

Fundamental Question on Virtualization

Since I first learned a bit about virtualization, there’s been one question that keeps nagging me: isn’t this what operating systems were originally supposed to do? Back in my undergraduate days in the Computer Science department at the University of Illinois at Urbana-Champaign, I took a course in operating systems, and I recall it being all about the allocation of memory, I/O, storage, and processor cycles among processes. This seems to be the exact same problem that virtualization is trying to solve. About the only differences I can see are that virtualization, at least on the server side, does try to go across physical boundaries with things like VMware’s VMotion, and it also allows us to avoid having to add physical resources just because one system requires Windows Server while another requires SuSE Linux.

So, back to the question. Did we simply screw up our operating systems so badly with so much bloat that they couldn’t effectively allocate resources? If so, you could argue that a new approach that removes all the bloat may be needed. That doesn’t necessarily require virtualization, however. There’s no reason why better resource management couldn’t be placed directly into the operating system. Either way, this path at least has the potential to provide benefits, because the potential value is more heavily based on the technology capabilities, rather than on how we leverage that technology.

In contrast, if the current state has nothing to do with the operating system’s capabilities, and more to do with how we choose to allocate systems to those resources, then will virtualization make things any better? Put another way, how much of the potential value in applying virtualization is dependent on our ability to properly configure the VMs? If that number is significant, we may be in trouble.

This is also a key point of discussion as people look into cloud computing. The arguments are again based on economies of scale, but the value is heavily dependent on the ability to efficiently allocate resources. If the fundamental problem is in the technology capabilities, then we should eventually see solutions that allow for both public-cloud computing and private-cloud computing (treat your internal data center as your own private cloud). If the problem is not the technology, then we’re at risk of taking our problems and making them someone else’s problem, which may not actually lead to a better situation.

What are your thoughts on this? Virtualization isn’t something I think about a lot, so I’m open to input here. So far, the most interesting thing for me has been hearing about products that are designed to run on a hypervisor directly, which removes all of the OS bloat. The risk is that 15 years from now, we’ll repeat this cycle again.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone without any reference to my employer. Use of my employer’s name is NOT authorized.