Archive for the ‘SOA’ Category
In this session, David Hood, CEO of Troux, talked about Troux’s approach to the market. He began by presenting “Zucca’s Equations”:
W2A: Where we are
W3TG: Where we want to go
HW(GT)2: How are we going to get there
He called out that the need for EA is increasing at a rate much higher than the maturity of the EA discipline, creating a gap. To fill that gap, he saw 4 keys to success:
- Start with the end in mind
- Make it repeatable
- Make it easy to consume
- Some parenting is required
In the section on consumability, David talked about visualizations, citing both the Facebook connections visualization (as an example of a beautiful visualization) and the Billion dollar-o-gram (as an example of a disruptive visualization). Visualizations of the information are important to the practice of Enterprise Architecture.
He ended with a slide containing a picture of Mary Poppins, indicating that sometimes a spoonful of sugar will be necessary to make the medicine go down. I think this goes hand-in-hand with making things consumable. Create a consumable version (sugar) of the information to bring to light the important information (medicine) necessary for our decisions, information that may have been unknown before.
Imran Qayyum, an EA for Cisco, was the presenter for this session. His team used the Proact BOST framework (Business, Operations, Systems, and Technology) to get started, providing an industry-specific reference model.
Imran’s team is responsible for development services, which covers all sorts of software development technologies necessary for the Cisco Engineering teams to design and build cool stuff. He gave a good demonstration of how they are using Troux, walking through how they’ve defined their domain in terms of capabilities and business functions delivered to their customers, including roadmaps for the technologies and applications that support each business function.
In his wrap-up, he mentioned that they will be looking into integration with PPM and CMDB technologies. I believe that as EA tools increasingly move toward decision support, integration among these different systems will become increasingly important, especially ones that provide financial information, since that’s a huge component of decision making for business leaders.
Sandra McCoy, Executive Director for Enterprise Architecture with Kaiser Permanente, gave this talk, subtitled as “Architecting Standardization in a Complex Healthcare Organization.”
She mentioned that the challenges her team faces are that architects would like to work in a nice orderly fashion, but the environment never allows for that. It’s an uphill battle, with blind curves, treacherous consequences, insufficient resources, etc.
They chose to start with standards, focused on defining where they wanted to go, and less so on defining where they currently are. Standards must be clearly visible and easy to find. Their concepts must be easily understood, and they must be an enabler. They need to be marketed as an enabler and a safeguard.
One interesting anecdote in the discussion was that they initially created a big Excel spreadsheet with their standards and got challenged by stakeholders to do something more innovative/cutting edge. It’s a great example that we’re always marketing ourselves in everything we do.
She reinforced the earlier points from Bill Cason and Warren Ritchie that we have to be able to describe our assets and resources to demonstrably show dependencies and impacts to business leaders as part of the decision making process.
In the Q&A portion, one person had caught that she had a box labeled “Innovation Standards.” This is the category for technologies under investigation that they wanted to track, including the results, both to avoid having multiple people looking at the same thing in different areas and to make the results of prior investigations available to others.
Her lessons learned were:
- Go slow to go fast: plan your approach (before you start talking to people), architect the 5 year vision, and create templates, messaging, RACIs and roadmaps.
- Don’t market what you can’t support: Their repository has sold itself; people want the results but not the work (the organization needed to participate to provide data for the repository), so create a collaborative approach.
- Don’t forget to practice what you preach: Architect first, ensure the EA tools and stack are standards, and architect a comprehensive enterprise architecture solution.
Warren Ritchie, CIO for Volkswagen Group of America, presented this session and told us about his personal journey with EA, its utility in a globalization concept, and how the EA program at VW Group of America provided initial benefits. He believes that EA is the missing piece in strategy management.
In his PhD dissertation (he has a PhD in Business Strategy), he looked for a more compelling explanation of why firms do or do not take advantage of a strategic opportunity: strategic choice or structural inertia? He found that small companies with a simple structure moved fast, medium size organizations (complex single business) were the slowest, and large, multi-business firms moved fast (possibly via a spinoff). The research pointed toward structural inertia being a key factor in whether or not a company took advantage of an opportunity.
So, if strategy implementation is about manipulating the structure, we cannot manage internal resources, products, and services as if they are independent things. If you can’t describe internal resource structure, you can’t implement strategy.
He went on to discuss VW’s strategy around sales and marketing globalization. I thought it was great that his diagram clearly showed areas for horizontal integration versus vertical integration. He talked about the notion of a modular platform for building cars, and how it was flexible and highly scalable. We need to do the same thing for organizations. Create a global core that can be customized for the particular regions. 70% of processes are global, 30% are local.
On speed to benefits, he suggested getting started with a specific “failure is not an option” project. Points on his slide were:
- Lay out the EA framework
- Populate the repository early as a trailing-edge activity
- Re-use repository content for later phases
- First efforts are labor intensive, but returns start quickly
The project they targeted was a transition in their application management service supplier. A big step was that they took the tacit knowledge held by the incumbent supplier and made that information explicit in an EA repository. They are now faster with projects because they know what things connect to each other, and by being faster, projects now cost less. The EA practice is now in high demand within the IT department and is starting to take hold with the business units. The ROI they’ve achieved is well over 100% on risk mitigation alone. They are in a position where they can go to suppliers with their model, and those suppliers map to it.
The overall conclusions were:
- Organizations that are better able to describe themselves are more adaptive to market opportunity.
- Globalization strategy of sales and marketing requires a rigorous description of the organization’s internal structure.
- After an initial investment in EA, each incremental investment is resulting in disproportionately positive returns.
The first session I’m in is one of the few sessions by Troux staff at the conference, being given by Bill Cason, CTO of Troux. Always nice to see a CTO doing these things.
Bill went over the IT strategy planning process, involving the business, the office of the CIO, and enterprise architecture. There was a good slide that visually showed the process as Troux sees it that I’ll try to get and repost here.
Early on, he emphasized the need to capture the business context in the forms of goals and strategies, as well as to have EA be the keeper of business capability maps, defined through collaboration with the business. These capability maps are what allow for fact-based conversations that aid the decision making process.
Bill’s Top 10 things to do:
- Help your boss. Their surveys show that most CIOs are managing strategy on their own.
- Establish sustainable processes. Get sponsorship, establish stewardship, define all roles and establish them, and get agreement on governance and measures.
- Involve the business
- Understand how organizations think. IT thinks about IT assets and activities, the business thinks about business capabilities. The capability map must align the IT portfolio with the business capabilities.
- Recognize what tools they use. Business has lots of manual methods, IT has some automation.
- Establish business context. If you can’t involve the business, at least try to understand their objectives.
- Start with the business questions. What does the business want to do? Where should it transform? How are the transformations progressing?
- Focus on key areas. “Burning platforms”: M&A, divestitures, regulatory drivers, time to market. This helps focus the work and make it more digestible. Near-term focus; grow scope over time.
- Influence departmental behavior. Link business roadmaps to IT roadmaps, communicate the dependencies and impacts.
- Be flexible. Executives will never use EA models.
Alex Cullen of Forrester posted an interesting blog regarding the “inevitable trend” of business empowerment. At the recent Forrester EA Forum, they invited attendees to rate, on a 1-5 scale, whether “the EA function has close ties with business management” and whether the “technology strategy and standards allow for rapidly changing technologies.” Not surprisingly, in my opinion, only 33% of respondents answered positively on the first question, and only 27% answered positively on the second question.
The combination of these two questions is very interesting. I’m very pragmatic when it comes to discussions about standards. Arbitrary standards may allow someone to fill out a compliance dashboard or meet their personal or team objectives, but the majority of those standards may not have any positive impact on the company’s strategy and goals, and in fact, may be an inhibitor. At the same time, this same linkage to strategy and goals must also apply to the use of new technologies. Someone needs to be the enterprise parent that asks the question, “do you really need that?” It may be a shiny new thing, but does it make a difference in the ability to accomplish the strategy and goals?
This is by no means an easy problem. Yes, sometimes there are very clear cost-cutting initiatives that make it easy to drive certain standards decisions. Sometimes, things are not so clear. For example, I have had conversations with people in the financial services industry who told me that the technology available to the financial consultants was important for recruiting. A company with slow technology adoption processes could be at risk of losing their top consultants, or failing to attract new ones, which winds up having a direct impact on company revenues.
The end result of all of this is that work on these two items must go hand in hand. You can’t hope to establish standards in the right areas if you don’t have an intimate knowledge of the business strategy and goals. You can’t have that intimate knowledge unless you have strong ties with business management. If the enterprise architecture team does not have these strong ties, they’re going to have to balance second-hand explanations against the potentially disconnected goals of the department to which they report (e.g. IT).
The right model for EA has to be that of the trusted advisor. I’ve commented about this previously in “Enterprise Architect: Advisor versus Gatekeeper” and “IT Needs To Be More Advisory”. When EA is being grown from within IT, a big challenge may be to establish that trust. Like it or not, the EA team may be carrying the baggage of the entire IT department with it. Trust is earned, so we need to find a way to establish one or more strong relationships with key business leaders, and then use those relationships to scale from there.
On Twitter, Scott Ambler (@scottwambler) posted:
Effective IT governance is based on motivation and enablement, not command and control.
At first glance, it would be hard to argue with this statement. In general, most people don’t like command and control, and who wouldn’t prefer the carrot over the stick? Having thought a lot about governance (and written a lot, too; hint, hint), I had to go deeper on what effective governance really entails. I’ve seen situations where a command and control governing style has succeeded and ones where it has failed. I’ve seen the same thing for motivation and enablement styles, as well. So what really is the key?
In situations where things turned out well for the company, it was because the organization, as a whole, saw things in the same way. They understood the strategic priorities and goals and balanced these against departmental or project priorities and goals in an appropriate way. Where things turned out badly is where those priorities and goals were not well understood, if they even existed at all. In other words, everyone had their own opinion on what the right thing to do was. In general, people always had good intentions with the decisions that they made, but the criteria they used to choose the best approach were not consistent from person to person or team to team. In the absence of this understanding, neither command and control nor motivation and enablement will fix things. Put in a bunch of commanders that don’t have a shared understanding, and you get a power struggle. Likewise, if we simply try to remove barriers and “enable” people, that is not going to help when accomplishing goals involves cross-project or cross-organizational efforts.
To have effective governance, you must first have clarity and a shared understanding of the goals and strategy around your efforts. If you don’t have this, then you may need to begin with a heavier command and control approach to not only get the word out, but to ensure that it sinks in. Once you build that shared understanding, a shift to a focus on enablement is certainly in order. If the understanding has taken root, people don’t need to be controlled anymore, and that should be your goal. Along with this, however, your people must be able to recognize when things fall into the grey area. As part of being enabled, there needs to be a trust factor that people will make those grey areas known to the command structure, even perhaps with options that look at things from both a micro and macro level. The command structure, in turn, must make decisions in an efficient manner, and then work its communication processes to augment the shared understanding in the organization.
If I had to put effective IT governance in a nutshell, it’s all about communication. If your communication is great, which means that you effectively communicate not just the direction but the reasons behind it, and you have a feedback process for discussing it with people who may disagree with it, you’re likely to have effective governance.
In this, my first blog post of 2011, I’d like to issue a challenge to the blogosphere to make 2011 the year of the event. There was no shortage of discussions about services in the 2000s; let’s have the same type of focus and advances in events in the 2010s.
How many of your systems are designed to issue event notifications to other systems when information is updated? In my own personal experience, this is not a common pattern. Instead, what I more frequently see is systems that always query a data source (even though it may be an expensive operation) because a change may have occurred, even though 99% of the time the data hasn’t. Rather than optimizing for the majority of requests by caching the information for fast retrieval, the systems are designed to avoid showing stale data, which has a significant performance impact when going back to the source(s) is an expensive operation.
With so much focus on web-based systems, many have settled into a request/response type of thinking, and haven’t embraced the nearly real-time world. I call it nearly real-time, because truly real-time is really an edge case. Yes, there are situations where real-time is really needed, but for most things, nearly real-time is good enough. In the request/response world, our thinking tends to be uni-directional. I need data from you, so I ask you for it, and you send me a response. If I don’t initiate the conversation, I hear nothing from you.
This thinking needs to broaden to where a dependency means that information exchanges are initiated in both directions. When the data is updated, an event is published, and dependent systems can choose to perform actions. In this model, a dependent system could keep an optimized copy of the information it needs, and create update processes based upon the receipt of the event. This could save lots of unnecessary communication and improve the performance of the systems.
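To make the pattern concrete, here is a minimal in-process sketch (all class, topic, and field names are illustrative, not from any specific product): the system of record publishes an event whenever its data changes, and a dependent system keeps a local copy current by reacting to those events instead of re-querying the source on every read.

```python
# Event-driven cache maintenance instead of polling. The bus, topics, and
# systems below are hypothetical placeholders for whatever messaging
# infrastructure an enterprise actually has in place.

from collections import defaultdict

class EventBus:
    """A trivial synchronous pub/sub channel."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

class CustomerSystem:
    """The system of record: publishes an event whenever data changes."""
    def __init__(self, bus):
        self._bus = bus
        self._data = {}

    def update(self, customer_id, record):
        self._data[customer_id] = record
        self._bus.publish("customer.updated", {"id": customer_id, "record": record})

class BillingSystem:
    """A dependent system: keeps an optimized local copy, refreshed by events."""
    def __init__(self, bus):
        self._cache = {}
        bus.subscribe("customer.updated", self._on_update)

    def _on_update(self, event):
        self._cache[event["id"]] = event["record"]

    def lookup(self, customer_id):
        # No expensive round trip to the source; the copy stays current
        # because updates arrive as events rather than via polling.
        return self._cache.get(customer_id)

bus = EventBus()
source = CustomerSystem(bus)
billing = BillingSystem(bus)
source.update("c42", {"name": "Acme"})
print(billing.lookup("c42"))  # {'name': 'Acme'}
```

The dependent system never issues a query back to the source after the initial exchange; the 99% of reads where nothing has changed are served locally, and the rare change propagates as an event.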
This isn’t anything new. Scalable business systems in the pre-web days leveraged asynchronous communication extensively. User interface frameworks leveraged event-based communication extensively. It should be commonplace by now to look at a solution and inquire about the services it exposes and uses, but is it commonplace to ask about the events it creates or needs?
Unfortunately, there is still a big hurdle. There is no standard channel for publishing and receiving events. We have enterprise messaging systems, but access to those systems isn’t normally a part of the standard framework for an application. We need something incredibly simple, using tools that are readily available in big enterprise platforms as well as emerging development languages. Why can’t a system simply “follow” another system and tap into the event stream looking for appropriately tagged messages? Yes, there are delivery concerns in many situations, but don’t let a need for guaranteed delivery so overburden the ability to get on the bus that designers just forsake an event-based model completely. I’d much rather see a solution embrace events and do something different like using a Twitter-like system (or even Twitter itself, complete with its availability challenges) for event broadcast and reception, than to continue down the path of unnecessary queries back to a master and nightly jobs that push data around. Let’s make 2011 the year that kick-started the event based movement in our solutions.
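The “follow” idea above can be sketched in a few lines. Everything here is hypothetical (the stream, the tags, the messages), and a real deployment would still need to address the delivery concerns just mentioned, but it shows how little machinery the basic model requires:

```python
# A Twitter-like event stream: systems broadcast tagged messages, and any
# other system can "follow" the tags it cares about. Illustrative only.

class EventStream:
    def __init__(self):
        self._followers = []

    def follow(self, tags, handler):
        """Register interest in messages carrying any of the given tags."""
        self._followers.append((set(tags), handler))

    def broadcast(self, message, tags):
        # Deliver to every follower whose tag set overlaps the message's tags.
        for wanted, handler in self._followers:
            if wanted & set(tags):
                handler(message, tags)

stream = EventStream()
received = []
stream.follow(["#inventory"], lambda msg, tags: received.append(msg))

stream.broadcast("SKU 1138 restocked", tags=["#inventory", "#warehouse"])
stream.broadcast("Nightly build passed", tags=["#ci"])  # not followed; ignored
print(received)  # ['SKU 1138 restocked']
```

Delivery here is best-effort, exactly the trade-off argued for above: a consumer that misses a message can still fall back to its source, but the common case no longer requires a query or a nightly batch job.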
According to ZDNet’s Joe McKendrick’s coverage of the recent Gartner Application Architecture, Development, and Integration summit, SOA governance and siloed thinking are top of mind.
If this really is the case, how do we make our governance efforts more effective? The more I think about this, the more I come back to a recent post of mine from earlier this year: “Want Successful Enterprise Architecture? Define ‘Enterprise’ First.” I’m convinced that this is a critical step for any effort that tries to go beyond a project-level scope, SOA initiatives included. If you don’t provide a structure that says what things will be implemented and managed at an enterprise level, versus a domain level or project/team level, anything with the term “enterprise” will be a struggle.
Too often, the approach to governance is concerned with establishing oversight, not establishing outcomes that are rooted in an agreed-upon definition of what will be managed at an enterprise level, a domain level, and a project level. Does it really help to set a standard that a particular coding library must be used when there is no central team that manages the library, no centralized support team, and no stated strategy for developer portability across projects? No, it just gets people up in arms and leads to accusations that EA or the governance team is an ivory tower setting arbitrary standards.
In my book, I defined governance as the combination of people, policies, and processes that are put in place to ensure the organization achieves one or more desired behaviors and outcomes. It’s not there to simply have a check mark that says, “I went through a review.” In the absence of clear desired behaviors and outcomes, that’s what you will have. There is no reason to have an enterprise architecture team review a project if there are no things that are managed (or desired to be managed) at an enterprise level. You need to have some idea of what those things are up front, along with a mechanism for quickly making decisions on new candidates for enterprise items. The project team must know that this analysis will be done, and that it is a necessary part of achieving the company’s strategic goals, which they should be well aware of. Lack of communication of these goals can be just as detrimental and is often a symptom of lack of agreement on enterprise goals or inadequately specified goals: “Sure, we need to cut our IT costs by sharing more systems. I’m all for it as long as they’re not mine.” Someone needs to define exactly what the target areas are.
To be successful, we must define the desired outcome first. We must clearly establish the list of things that must be managed at an enterprise level, a divisional level, or left to the discretion of individual projects/teams. In fact, it’s even more fundamental than this. We can’t even know what success is without doing this step. There was no shortage of companies in the past that stated they were adopting SOA; my question to them would be, “How do you know when you’ve been successful?” Simply having a bunch of services doesn’t mean you’ve adopted SOA; it has to be the right services. Too often, enterprise architecture teams are positioned for failure because this fundamental step has not happened. Before you task your enterprise architecture team with reviewing all projects, make sure you’ve defined what enterprise is. If you haven’t, task your enterprise architecture team with doing the analysis of what’s out there and coming up with some recommendations. Then, your governance program will actually have a desired outcome to use in their reviews.
On Twitter, Brenda Michelson of Elemental Links started a conversation with the question:
Do #entarch frameworks enable or constrain practice of (value from) enterprise architecture?
In my comments back to Brenda, it became clear to me that there’s a trap that many teams fall into, not just Enterprise Architecture: taking an inward view rather than an outward view.
As an example, I worked with a team once that was responsible for the creation, delivery, and evolution of data access services. Over time, teams that needed these services were expressing frustration that the services available were not meeting their needs. They could eventually get what they needed, but in a less than efficient manner. The problem was that the data services team’s primary goal was to minimize the number of services they created and managed. In other words, they wanted to make their job as easy as possible. In doing so, they made the job of their customers more and more difficult. This team had an inward view. It’s very easy to fall into this trap, as performance objectives frequently come from internally measured items, not from the view of the customer.
EA teams that obsess over the adoption of EA frameworks fall into the same category. Can EA frameworks be a valuable tool? Absolutely. But if your primary objective becomes proper adoption of the framework versus delivering value to your customers, you have now fallen into an internal view of your world, which is a recipe for failure.
Instead, teams should strive to maintain a service mentality. The primary focus should always be on delivering value to your customers. There’s a huge emphasis on EA becoming more relevant to the business; in order to do so, we need to deliver things that fit into the context of the business and how they currently make decisions. If we walk in preaching that they need to change their entire decision making process to conform to a framework, you’ll be shown the door. You must understand that you are providing a service to the teams you work with and helping them get their job done better than they could without you. While a framework can help, that should never be your primary focus. Internal optimizations of your process should be a secondary focus. In short, focus on what you deliver first, how you deliver it second. If you deliver useless information efficiently, it doesn’t do anyone any good.
In the Wired magazine article on the relationship between AT&T and Apple (see: Bad Connection: Inside the iPhone Network Meltdown), the author, Fred Vogelstein, presents a classic service management problem.
In the early days of the iPhone, when data usage was coming in at levels 50% higher than what AT&T projected, AT&T Senior VP Kris Renne came to Apple and asked if they could help throttle back the traffic. Apple consistently responded that they were not going to mess up the consumer experience to make the AT&T network tenable.
In this conversation, AT&T fell into the trap that many service providers do: focusing on their internal needs rather than that of the customer. Their service was failing, and the first response was to try to change the behavior of their consumers to match what their service was providing, not to change the service to what the consumer needs.
I’ve seen this happen in the enterprise. A team whose role was to deliver shared services became more focused on minimizing the number of services provided (which admittedly made their job easier) than on providing what the customers needed. As a result, frustration ensued, consumers were unhappy and were increasingly unwilling to use the services. While not the case in this situation, an even worse possibility is where that service provider is the only choice for the consumer. They become resigned to poor service, and the morale goes down.
It is very easy to fall into this trap. A move to shared services is typically driven by a desire to reduce costs, and the fewer services a team has to manage, the lower their costs can be. This cannot be done at the expense of the consumer though. First and foremost, your consumers must be happy, and consumer satisfaction must be part of the evaluation process of shared service teams. Balance that appropriately with financial goals, and you’ll be in a better position for success.
Barnes and Noble has introduced a $149 Wi-Fi version of its Nook eReader. This has now reached a price point where I think parents may consider purchasing one for their children. Having recently moved, I know where my budget for book purchases has gone recently: kids books. This ranges from learning to read books all the way up to the several-hundred-page series books like Harry Potter and Percy Jackson. While there’s no easy way to get all of these existing books onto an eReader (I think demand would shoot into the stratosphere if there was), there’s certainly no shortage of new book purchases in the future, either. So what would make a great kids eReader?
First, I think existing eReaders like the Nook or Kindle are probably fine for the Harry Potter/Percy Jackson age group, say 9 and up. They should have no problem using the device; it’s more a question of taking care of the device. For the under-6 age group, I don’t think current eInk screens are going to provide the right amount of visual stimulation, so at best, it’s probably a device best used while your child is in your lap and you’re reading to them. They’ll pick up the interface of the device, and be ready to go when they reach the chapter book stage of reading. The 7-8 age group is the trickier one. The device is going to get thrown into a school backpack, have who knows what smeared all over it from their hands, and so on; you get the point. The device needs to be of equivalent durability to a Nintendo DS. Most 7-8 year olds I know have one of these.
In terms of features, I think Barnes and Noble has it right with the WiFi only. The kids aren’t going to be purchasing books in airports; it’s a reading device. I’d even be okay with a device that only allows USB sync, but since I wouldn’t expect the removal of WiFi to change the price point, I’d rather have it than not. If you can give me a $100 price point with sync-only capabilities, like an iPod Nano or Shuffle, even better. Purchasing from the device would need to be disabled at the discretion of the parent, especially with the one-click purchase approach of the Kindle. As a parent, I would prefer to go to a website, make the purchase, and then choose to deliver to my kids’ devices when they connect. Add in date-based delivery options, and friends and family could purchase presents that automatically show up on the kids’ birthdays, or we could even link in to the North Pole and allow Santa to deliver them to the device on Christmas morning. eInk-based screens are a must, because the kids will forget to charge the device, so battery life is critical. Finally, we must be able to share books across multiple devices. I don’t want to have to buy separate copies of the latest book by Rick Riordan for each device, as my kids share the books now.
The real question is whether a dedicated device makes sense for your children. I think we’re looking at an age group of 7-11. From 12 and up, there’s a good chance your child will have an iPad/Netbook/Tablet/Laptop of their own with screen space suitable for reading. Does the independent eReader get put on the shelf at that point? I know I have stopped using my Kindle now that I have the Kindle app on my iPad. Personally, I think the answer to the question is still yes, even if it’s only used for 5 years from ages 7 to 11. Five years for any electronic device is a pretty good life span. We spend $150 on a Nintendo DS for probably 5 years of use, so why wouldn’t we do the same for an eReader with more educational value? As long as there’s a software version of the reader for the multi-purpose device, all their books can go with them.
The final piece of the puzzle would be to have Scholastic tie their school book programs into this. Parents should be able to purchase for any eReader from their website and have it tie into the classroom or school fund raising programs that they offer. While the vertically-integrated device and store models of Amazon and Barnes and Noble probably won’t allow purchases for other devices, a publisher-owned store should.
What looks to be a very simple question is actually a very tough one. The answer to this is of particular importance to a domain architecture team (a team whose scope is larger than a single project or solution), but the principles apply even to a solution architect. The solution architect has a slight advantage that they’re typically working with a team that has a single common goal: deliver the solution. Domain architects, however, must balance the delivery focus of project teams with setting the stage for systemic success across a broader portfolio of solutions, be it within a line of business or across the entire enterprise.
To me, architecture is about creating a categorization that establishes boundaries. These boundaries partition the solution into different areas. What’s the most frequent reason for partitioning? To create areas of responsibility. Within a project, you break things down to a sufficient level in order to be able to hand off units of work to individual developers or engineers, who now have responsibility for delivering that work. The biggest challenge is where those units of work overlap. When thinking of the typical Visio diagrams associated with architecture, this type of view is consistent with a boxes and lines view. We’re interested in what the boxes are and what’s on the lines (the interfaces and messages) that connect them.
While this boxes, lines, and responsibility approach works for both project and domain architects, there is one significant difference: the timeframe of responsibility. Once a project has been delivered, the development responsibilities typically go away. Your decisions on how to partition the project are based solely on getting it delivered. A domain architect, however, is interested in the full lifecycle of responsibility for a component. It’s not just the initial development, but the ongoing care and feeding, the onboarding of new consumers, etc. If we don’t partition things to support future change, the pain involved in supporting that change will be high. The partitioning that allows for an efficiently managed portfolio may not be the same partitioning that allows for the most efficient development. These needs have to be balanced. In a perfect world, the partitioning for portfolio management could occur outside the context of any project, allowing the “optimal” partitioning to be used as an input by the project architect to balance these needs. In reality, that context doesn’t exist, and we’re doing our best to build it as we go along.
This type of approach can be challenging for domain architects when many people have the perception that the architect is the nuts and bolts person, looking at how things are built, rather than what gets built. That’s because many architects have gotten there by being a senior developer or engineer. I’m not suggesting that the “how” portion isn’t important, especially because the “how” decisions also have a lot to do with partitioning, but the “what” is increasingly important, because that ultimately defines what must be managed for the long term. If those units are difficult to change over time because of poor partitioning from a responsibility and ownership viewpoint, it will be a struggle.
What are your thoughts on what things are architecturally significant?
All content written by and copyrighted by Todd Biske. If you are reading this on a site other than my “Outside the Box” blog, it’s probably being republished without my permission. Please consider reading it at the source.
David Linthicum continued the conversation around design-time governance in cloud computing over at his InfoWorld blog. In it, he quoted my previous post, even though he chose to continue to use the design-time moniker. At least he quoted the paragraph where I state that I don’t like that term. He went on to state that I was “arguing for the notion of policy design,” which was certainly part of what I had to say, but definitely not the whole message. Finally, Dave made this statement:
The core issue that I have is with the real value of the technology, which just does not seem to be there. The fact is, you don’t need design-time service governance technology to design and define service policies.
Let’s first discuss the policy design comment. Dave is correct that I’m an advocate for policy-based service interactions. A service contract should be a collection of policies, most if not all of which will be focused on run-time interactions and can be enforced by run-time infrastructure. Taking a step backward, though, policy design is really a misnomer. I don’t think anyone really “designs” policies, they define them. Furthermore, the bulk of the definition that is required is probably just tweaking of the parameters in a template.
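A minimal sketch may make the “tweaking parameters in a template” point concrete. Everything here is hypothetical (the template shape, the parameter names, and the helper function are my own illustration, not any governance product’s model): defining a policy amounts to filling in a few service-specific parameters on a shared template, and a service contract is just the resulting collection of policies.

```python
# Hypothetical sketch: a service contract as a collection of policies, each
# "defined" by filling in parameters on a shared template.

RESPONSE_TIME_TEMPLATE = {
    "type": "response-time",
    "percentile": 95,     # template default; rarely changed
    "max_millis": None,   # must be set per service
}

def define_policy(template, **overrides):
    """'Defining' a policy is mostly parameter tweaking: copy the template,
    apply the service-specific values, and check nothing was left unset."""
    policy = dict(template)
    policy.update(overrides)
    missing = [key for key, value in policy.items() if value is None]
    if missing:
        raise ValueError(f"unset policy parameters: {missing}")
    return policy

# The contract for one service: a list of run-time-enforceable policies.
order_service_contract = [
    define_policy(RESPONSE_TIME_TEMPLATE, max_millis=500),
]
```

The run-time enforcement infrastructure, not the definition step, is where the interesting work happens, which is why a separate “design-time” tool adds so little here.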
Now, moving to Dave’s second comment, he made it very clear that he was talking about governance technology, not the actual governance processes. Speaking from a technology perspective, I’ll agree that for policy management, which includes policy definition, all of the work is done through the management console of the run-time enforcement infrastructure. There are challenges with separation of concerns, since many tools are designed with a single administration team in mind (e.g. can your security people adjust security policies across services while your operations staff adjust resource consumption while your development team handles versioning, all without having the ability to step on each other’s toes or do things they’re not allowed to do?). Despite this, however, the tooling is more than adequate for the vast majority (certainly better than 80-90% in my opinion) of enterprise use cases.
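The separation-of-concerns challenge above can be illustrated with a minimal sketch. This is purely hypothetical (the section names, team names, and function are mine, not any vendor’s API): the idea is simply that each team should only be able to modify the policy sections it owns.

```python
# Hypothetical sketch: enforcing that each administration team may only
# modify the policy sections it owns, so teams can't step on each other's toes.

POLICY_OWNERSHIP = {
    "security": "security-team",      # e.g. authentication, encryption policies
    "resources": "operations-team",   # e.g. throttling, resource consumption
    "versioning": "development-team", # e.g. supported interface versions
}

def update_policy(policies, section, change, team):
    """Apply a change to one section of a service's policies, but only
    if the requesting team owns that section."""
    owner = POLICY_OWNERSHIP.get(section)
    if owner != team:
        raise PermissionError(
            f"{team} may not modify '{section}' (owner: {owner})")
    policies.setdefault(section, {}).update(change)
    return policies

policies = {}
update_policy(policies, "security", {"auth": "mutual-TLS"}, "security-team")
```

A tool designed for a single administration team effectively has one row in that ownership table; the enterprise need is for several, each scoped to its own concerns.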
The final comment from me on this subject, however, gets back to my original post. Your SOA governance effort involves more than policy management and run-time interactions. Outside of run-time, the governance effort has the closest ties to portfolio management efforts. How are you making your decisions on what to build and what to buy, whether provided as SaaS or in house? Certainly there is still a play for technologies that support these efforts. The challenge, however, is that the processes that support portfolio management activities vary widely from organization to organization, so beyond a repository with an 80%-complete schema for the service domain, there’s a lot of risk in trying to create tools to support it and be successful. How many companies actually practice systemic portfolio management versus “fire-drill” portfolio management, where a “portfolio” is produced on a once-a-year (or some other interval) basis in response to some event, and then ignored for the rest of the time, only to be rebuilt when the next drill occurs? Until these processes are more systemic, governance tools are going to continue to be add-ons to other, more mature suites. SOA technologies tried to tie things to the run-time world. EA tools, on the other hand, are certainly moving beyond EA, and into the world of “ERP for IT,” for lack of a better term. These tools won’t take over all corporate IT departments in the next 5 years, but I do think we’ll see increased utilization as IT continues its trend toward being a strategic advisor and manager of IT assets, and away from being the “sole provider.”
David Linthicum started a debate when he posted a blog with the attention-grabbing headline of “Cloud computing will kill these 3 technologies.” One of the technologies listed was “design-time service governance.” This led to a response from K. Scott Morrison, CTO and Chief Architect at Layer 7, as well as a forum debate over at eBizQ. I added my own comments both to Scott’s post, as well as to the eBizQ forum, and thought I’d post my thoughts here.
First, there’s no doubt that the run-time governance space is important to cloud computing. Clearly, a service provider needs to have some form of gateway (logical or physical) that requests are channeled through to provide centralized capabilities like security, billing, metering, traffic shaping, etc. I’d also advocate that it makes sense for a service consumer to have an outgoing gateway as well. If you are leveraging multiple external service providers, centralizing functions such as digital signatures, identity management, transformations, etc. makes a lot of sense. On top of that, there is no standard way of metering and billing usage yet, so having your own gateway where you can record your own view of service utilization and make sure that it’s in line with what the provider is seeing is a good thing.
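The consumer-side metering idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class, service names, and invoice format are all my own assumptions, not any product’s interface): an outgoing gateway keeps its own count of calls per service, so the provider’s bill can be checked against an independent record.

```python
# Hypothetical sketch: a consumer-side outgoing gateway that records its own
# view of service utilization, for reconciliation against the provider's bill.

from collections import Counter

class OutboundGateway:
    def __init__(self):
        self.usage = Counter()  # our own count of calls per external service

    def send(self, service, request):
        """Channel an outgoing request through the gateway, metering it.
        (Signing, identity mapping, transformation, etc. would also go here.)"""
        self.usage[service] += 1
        return f"response from {service}"  # stand-in for the real call

    def reconcile(self, provider_invoice):
        """Return services where the provider's billed count disagrees with
        ours, as {service: (provider_count, our_count)}."""
        return {svc: (count, self.usage.get(svc, 0))
                for svc, count in provider_invoice.items()
                if count != self.usage.get(svc, 0)}

gw = OutboundGateway()
for _ in range(3):
    gw.send("order-service", {})

# Provider claims 5 calls; our gateway recorded 3, so the mismatch is flagged.
discrepancies = gw.reconcile({"order-service": 5})
```

Until there is a standard metering and billing model, this kind of independent record is the consumer’s only leverage when a bill looks wrong.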
The real problem with Dave’s statement is the notion that design-time governance is only concerned with service design and development. That’s simply not true. In my book, I deliberately avoided this term, and instead opted for three timeframes of governance: pre-project, project, and run-time. There’s a lot more that goes on before run-time than design, and these activities still need to be governed. It is true that if you’re leveraging an external provider, you don’t have any need to govern the development practices. You do, however, still need to govern:
- The processes that led to the decision of what provider to use.
- The processes that define the service contract between you and the provider, both the functional interface and the non-functional aspects.
- The processes executed when additional consumers within your organization are added to externally provided services.
For example, how is the company deciding what service provider to use? How is the company making sure decisions by multiple groups for similar capabilities are in line with company principles? How is the company making sure that interoperability and security needs are properly addressed, rather than being left at the whim of what the provider dictates? What happens when a second consumer starts using the service, yet the bills are being sent to the first consumer? Does the provider’s service model align with the company’s desired service model? Does the provider’s functional interface create undue transformation and integration work for the company? These are all governance issues that do not go away when you switch to IaaS, SaaS, or PaaS. You will need to ensure that your teams are aware of the contracts in place, and don’t start sending service requests without being properly onboarded into the contractual relationship. You will also need to ensure that your internal allocation of charges takes multiple consumers into account, if necessary. All of these must happen before the first requests are sent in production, so the notion that run-time governance is the only governance concern in a cloud computing scenario is simply not true.
A final point I’m adding on after some conversation with Lori MacVittie of F5 on Twitter. Let’s not forget that someone still needs to build and provide these services. If you’re a service provider, clearly, you still have technical, design-time governance needs in addition to everything else discussed earlier.