Author Archive

Kindle needs an iTunes equivalent

I’m an unabashed Kindle fan, but I’ve come to realize that there is room for improvement. The Kindle needs an iTunes equivalent. Not the store, but the desktop companion application. Clearly, Amazon’s original intent for it was as a book reader. For me, however, the usage parallels that of my first iPod. When the iPod came out, there was no iTunes store, but there was iTunes. You took your existing CDs, ripped them into iTunes, and put the content onto your iPod. While there’s no way to “rip” a physical book onto the Kindle, there’s still plenty of electronic content that I deal with each and every day that I’d love to have on the Kindle. Unfortunately, there’s no good solution for that. Yes, you can email these documents to Amazon and pay a fee for wireless delivery (not cost-effective for as many documents as I’d convert), or you can use a program like MobiPocket Creator or Stanza to convert them yourself, which, frankly, is a pain. What I want to be able to do is just drop a Word document or a PDF into a library in this application, and then the next time I connect my Kindle via USB, have those files converted and placed onto the device.
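To make the idea concrete, here’s a minimal sketch of what that drop-and-sync behavior could look like. Everything in it is an assumption: the folder locations, the supported formats, and especially the conversion step, which is just a stand-in for whatever engine Amazon or a third party would actually provide.

```python
from pathlib import Path
import shutil

# Hypothetical folders: a "library" you drop documents into, and the
# Kindle's documents folder as it appears when connected over USB.
LIBRARY = Path.home() / "KindleLibrary"
KINDLE_DOCS = Path("/Volumes/Kindle/documents")
SUPPORTED = {".doc", ".docx", ".pdf"}

def convert_for_kindle(source: Path) -> Path:
    """Stand-in for a real converter (the kind of thing MobiPocket Creator
    or Stanza does). Here we just copy the file with a .mobi extension so
    the rest of the flow can be illustrated."""
    target = source.with_suffix(".mobi")
    if not target.exists():
        shutil.copy2(source, target)
    return target

def sync_library_to_kindle() -> None:
    """Convert anything new in the library and copy it onto the device."""
    for doc in sorted(LIBRARY.iterdir()):
        if doc.suffix.lower() not in SUPPORTED:
            continue
        converted = convert_for_kindle(doc)
        destination = KINDLE_DOCS / converted.name
        if not destination.exists():
            shutil.copy2(converted, destination)

if __name__ == "__main__":
    # Only sync when both the library and a connected Kindle are present.
    if LIBRARY.exists() and KINDLE_DOCS.exists():
        sync_library_to_kindle()
```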

Why would I want to do this? I receive new documents probably every single day, ranging from 2-3 pages to 20 pages or more. Keeping them on the laptop usually means they don’t get read, and printing them out wastes paper and still likely means they don’t get read, since I don’t have them with me when I have time to read. Putting them on the Kindle would be a great solution for me. Plus, the Kindle allows annotations, so if I need to note something to discuss with others, it can do that for me. If this were possible, I think it would open up the Kindle to the corporate market. If there were ways to tie a Kindle to an analyst firm subscription, even better. Amazon would really need to improve the content management on the device, because one big long list quickly becomes problematic, but that can be solved through software. So, Amazon, when can we see the desktop content manager for the Kindle, or are you leaving this up to a third party to provide?

EA Communications

This seems to be the topic of the week, with excellent posts on the subject from both Leo de Sousa and Serge Thorn. This has been on my to-do list since a meeting with Bruce Robertson where we discussed both my thinking on EA Services and Gartner’s, which also resulted in a blog post from Bruce. In that conversation, Bruce convinced me that communication should be a top-level EA service, rather than an implied activity within all other EA services, as had been my previous stance. That stance was also challenged by Aleks Buterman in our discussions on Twitter.

Bruce’s stance was that communication is essential to everything that EA does. If your EA team can’t communicate effectively, its chances of success are greatly diminished. Defining communication as a top-level EA service emphasizes its importance not just for the EA team, but for everyone who utilizes the EA services.

Given that assumption, what does the communication service look like? I think Leo gave a great start at a communication plan. In reality, the EA communication service shouldn’t be very different from any other communication service; the only difference is the subject being communicated. Therefore, let’s learn from the practices of communications experts. Leo relied on a communication plan created by a colleague that is used for many things, not just enterprise architecture. I’m trying a similar approach, initially based on a template from gantthead.com, but it has now been customized quite a bit. In the plan we capture a number of items. You’ll see there are many similarities to what Leo had to say, plus some additional items I think are important. The plan is a simple Excel spreadsheet, with each row representing a unique “audience.” These audiences do not have to be mutually exclusive; in fact, it’s quite common to have one row targeted at a broad audience, and then other rows targeted at narrower subgroups. For each audience, the following things are captured (a rough sketch of how a single row might be modeled follows the list):

  • Questions to answer / Information to present: In a nutshell, what are we trying to communicate to the audience?  What are the two or three key items to present?
  • Sensitivities: This one isn’t on Leo’s list, but I think it’s very important. What are the biases and background that the audience has that may positively or negatively impact the effectiveness of the communication? For example, if your organization has tried the same initiative 5 different times, and you’re proposing the sixth, you should know that you’re walking into a room full of doubters.
  • Mechanism: How will the communication be delivered? This can involve multiple mechanisms, including presentations, podcasts, webinars, blogs, whitepapers, etc.
  • Objective: What is the objective of the communication for the audience? This is different from the information to present; it is the behavior you expect to see if the communication is successful. Obviously, the communication alone may not achieve the objective, but it should represent a big step in that direction.
  • Author(s): Who will create the communications collateral?
  • Presenter(s): Who will present the information?
  • Delivery date(s): When will the communication be delivered, and if it’s an on-going process, at what frequency? If there are multiple delivery dates, when will the last delivery occur?
  • Evaluation Method: How will we evaluate the effectiveness of the communication?  There may be multiple evaluations.
  • Follow-up date: When will follow-up occur with the audience to gauge effectiveness and retention?
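For those who like to see the structure explicitly, here’s a minimal sketch of how one row of such a plan might be modeled. The field names simply mirror the list above, and the example row is made up; it isn’t tied to any particular tool or template.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class AudienceCommunicationPlan:
    """One row of the communication plan: a single target audience."""
    audience: str                      # e.g., "All of IT" or a narrower subgroup
    key_messages: List[str]            # questions to answer / information to present
    sensitivities: List[str]           # biases or history that may color reception
    mechanisms: List[str]              # presentations, podcasts, webinars, blogs, ...
    objective: str                     # behavior expected if the communication succeeds
    authors: List[str]                 # who creates the collateral
    presenters: List[str]              # who delivers it
    delivery_dates: List[date]         # one-time or recurring delivery schedule
    evaluation_methods: List[str]      # how effectiveness will be evaluated
    follow_up_date: Optional[date] = None  # when to gauge effectiveness and retention

# A purely illustrative row for a broad audience.
plan = [
    AudienceCommunicationPlan(
        audience="All of IT",
        key_messages=["What the EA team does", "How to engage EA services"],
        sensitivities=["Previous EA efforts were seen as ivory-tower exercises"],
        mechanisms=["Town hall presentation", "Intranet blog post"],
        objective="Teams know when and how to engage EA",
        authors=["EA team"],
        presenters=["Chief Architect"],
        delivery_dates=[date(2009, 7, 1)],
        evaluation_methods=["Survey after the town hall"],
        follow_up_date=date(2009, 9, 1),
    ),
]
```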

This may seem like a lot of formality, but I’ve seen the benefits of it firsthand, as well as the risks associated with ad hoc communication efforts. My experiences with ad hoc communication have been hit or miss, but when the time was taken to develop a formal plan, the efforts have always been successful. I hope this helps you in your efforts.

Why I Blog

James McGovern, in one of his parting blog posts (only time will tell whether that’s really the case or not), asked some questions of me. Here’s his comment:

Todd Biske: I can count the number of peer enterprise architects from Fortune enterprises who are credible bloggers on one hand and I must say that your blog is the gold standard in this regard. Could you make your next blog entry on the topic of why you blog but more importantly provide a sense of why in a way that will encourage some of our other industry peers to come out of hiding? Please also share what other topics are of interest to you besides SOA? Curious to know if you have drunk the Kool-aid on $$$$ ECM technologies that really should cost $ or whether you have ever attended an OWASP meeting, etc?

First, thanks for the compliment. While we definitely have different styles, the fact that your blog exists is a continual incentive for me to keep blogging, as it is a sign that there are practicing EAs who are also willing to share publicly. Furthermore, your efforts in calling attention to other bloggers through your blog’s popularity have helped a number of other corporate IT bloggers build an audience, which is critical for keeping the information flowing.

The reason I blog has always been very simple: sharing information. I have a hard time believing that there aren’t other people who are thinking about the exact same problems that I am. There are plenty of paid services that can provide access to “the network,” whether it’s buying a vendor’s product, paid analysis, consultants, etc. I have no problem with them trying to make a business out of it, but I also think there’s no reason why we shouldn’t be sharing information with our peers for free. While there will always be work that must remain private for competitive reasons, how much of what we do every day really falls into that category? Does my recent post on EA services reveal anything private about my employer? No. Can the comments and discussion in public Internet forums help make those definitions better? Absolutely, and many organizations are already doing this, only through closed, paid networks. Those networks provide great information, but you have to pay to join the club. So, that’s a long way of saying that I prefer to be an example of public sharing, and to let people learn from my experiences in the hope that they’ll share some of their own. It’s been far more giving than receiving, but I’m 100% okay with that.

To your other questions… other topics of interest to me: anything of a strategic nature with regard to the use of IT. I’m a big-picture thinker, and have always been interested in the application of technology rather than the technology per se. I did a lot of work in usability in my early days for just this reason. I’m also very interested in the human aspects of things, taking a social psychology angle (I just heard that term on a podcast and really liked it). Outside of that, the rest of my time revolves around faith and family in one way or another. On ECM technologies, I haven’t had to do a ton of work in this space, but I do know I haven’t drunk any Kool-aid. I’m fortunate to do some work for my kids’ school, which has to leverage IT on a shoestring, so I’m able to keep an eye on some less expensive tools, including ECM recently. On OWASP, I have not attended a meeting, but I have spoken with a colleague at work who wants to get a St. Louis group established. I will put him in touch with you to get some advice on getting the group going.

James, I hope you continue to share your wisdom, and I’m sure you will, even if it’s not through the blogosphere anymore. Thanks for your part in making my blog better and building my audience.

EA Services: Part Two

Jeff Schneider posted the following comment to my previous post, “What are your EA Services?”

Your services feel too ‘responsive’ – like someone has to pick up the phone and call you in order for EA to be valuable. Thoughts?

I was hoping someone would ask a question along these lines, because it’s something I’ve thought a lot about. Services, by their nature, should seem very ‘responsive.’ After all, there should always be a service consumer and a service provider. The consumer asks, the provider does. In the context of an enterprise architecture team, or any other team for that matter, you do have to ask the question, “If no one is asking me to do this, why am I doing it?” Is that stance too theoretical, though?

When I think about this, I think services in this context (ITIL/ITSM) follow the same pattern that can be used for web services. There are services that are explicitly invoked at the request of a consumer, and then there are services that are executed in response to some event. In the latter case, the team providing the service is the one monitoring for the event. If some other team was monitoring for it, and then told your team to do something, then we’re back to the request/response style with one team acting as consumer and the other acting as the provider.

Coming back to the EA services, I think an Enterprise Architecture team will typically have a mix of request/response-style and event-driven services. It’s probably also true that if the EA team is closer to project activity, you’ll see more request/response services done on behalf of project teams, such as the Architectural Assessment Services and the Architectural Consulting Services I mentioned. If your EA team is more involved with portfolio management and strategic planning, then it’s possible that you may have more event-driven services, although these could still be request/response. Strategic direction could be driven by senior management, with direction handed down on areas of research. That direction represents a service request from that strategic planning group to the EA team. In an event-driven scenario, the EA team would be watching industry trends, metrics, and other things and making their own call on areas for research. It’s a subtle difference.

Anyway, I do think Jeff has a valid point, and as part of defining the services, you should classify the triggers that cause each service to be invoked. This will clearly capture whether they are event-driven or request/response. If your list is entirely request/response, that’s probably something to look into. That may be what’s needed today, but you should be practicing good service management and planning for some event-driven services in the future if your team is to continue providing value to the organization.
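As a quick illustration, here’s a minimal sketch of what tagging each service with its trigger type might look like. The structure is just an assumption on my part, not a formal catalog format; the service names come from my earlier post.

```python
from dataclasses import dataclass
from enum import Enum

class Trigger(Enum):
    REQUEST_RESPONSE = "request/response"   # a consumer explicitly asks for the service
    EVENT_DRIVEN = "event-driven"           # the EA team acts on an event it monitors

@dataclass
class EAService:
    name: str
    trigger: Trigger
    trigger_description: str

catalog = [
    EAService("Architectural Assessment Services", Trigger.REQUEST_RESPONSE,
              "A project team requests a review of its architecture"),
    EAService("Architectural Research Services", Trigger.EVENT_DRIVEN,
              "Industry trends or metrics cross a threshold the EA team is watching"),
]

# A catalog that is entirely request/response is probably worth a second look.
if all(service.trigger is Trigger.REQUEST_RESPONSE for service in catalog):
    print("No event-driven services defined; consider planning for some.")
```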

Thanks to Jeff Adkins for the good discussion about this on Twitter.

What are your EA Services?

A week or so ago, I asked about defining EA services on Twitter. My use of the term services here is more in the ITIL/ITSM sense than what typically comes to mind when discussing SOA, but I could write another blog post dedicated to that subject. I’ve been working to define EA services at work, and it’s been a very interesting exercise, and I had hoped that other EAs (or former EAs) on Twitter would have something to contribute.

Something that is a bit surprising to me is that many IT teams struggle to explain exactly what they do, especially those whose primary purpose isn’t project work. This doesn’t mean that the team isn’t needed, but it does put you at risk of having the team be defined by its individuals rather than by its responsibilities. Depending on who the individual is, you get a different set of capabilities, making it difficult to quantify and measure what the team, rather than the individual, does. A conversation with my boss and another team member on a different subject brought up the term “define by example,” and we all agreed that it’s usually a bad thing. Examples are very important in illustrating a concept, but they shouldn’t be the definition. The same thing goes for a team. Your team should not be defined by the individuals on it; rather, the individuals on it should be providing the defined services.

Getting back to the subject, the initial list of EA services I came up with, vetted by Aleks Buterman, Leo de Sousa, and Brenda Michelson, is:

  • Architectural Assessment Services: The operations in this service include anything that falls into the review/approve/comment category, whether required or requested. Ad hoc architectural questions probably go here, but that’s one area I’m still sitting on the fence about.
  • Architectural Consulting Services: The operations in this service include anything where a member of the EA team is allocated to a project as a member of that project team, typically as a project architect. The day-to-day activities of that person would now be managed by a project manager, at least to the extent of the allocation.
  • Architectural Research Services: The operations in this service are those that fall into the research category, whether formal or informal. This would include vendor conversations, reading analyst reports, case study reviews, participation in consortiums, etc.
  • Architectural Reference Services: The operations in this service are those that entail the creation of reference material used for prescriptive guidance of activities outside of the EA team, such as patterns, reference models, reference architectures, etc.
  • Architectural Standards Services: Very similar to reference services, this service is about the creation of official standards. I’m still on the fence as to whether or not this should be collapsed into a single service with the reference services. Sometimes, standards are treated differently than other reference material, so I’m leaving it as its own service for now.
  • Architectural Strategy Services: Finally, strategy services capture the role of architecture in strategy development, such as the development of to-be architectures. If there is a separate strategy development process at your organization, this one represents the role of enterprise architecture in that process.

Now, the most interesting part of this process has not been coming up with this list, but thinking about the metadata that should be included for each of these services. Thinking like a developer, what are the inputs and outputs of each? Who can request them? Which are internal services (e.g., only ever requested by the EA manager), and which are external services (e.g., requested by someone outside of EA)? What are the processes behind these services? Are these services always part of a certain parent process, or are they “operations” in multiple processes? How do we measure these services? You can see why this suddenly feels very much like ITIL/ITSM, but it also has parallels to how we should think about services in the SOA sense, too. Thinking in the long term, all of these services need to be managed. What percentage of work falls into each bucket? Today, there may be a stronger need to establish solid project architecture, leading to a higher percentage of time spent consulting. Next year, it may shift to strategy services or some other category. The year after that, the service definitions themselves may need to be adjusted to account for a shift toward more business architecture and less technology architecture. Adjusting to the winds of change is what service management is all about.
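Thinking like a developer about that metadata, it might be captured roughly like this. The field names just restate the questions above, and the example entry is purely illustrative rather than an actual catalog definition.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EAServiceDefinition:
    """Metadata for one EA service, mirroring the questions above."""
    name: str
    inputs: List[str]                 # what a requester must provide
    outputs: List[str]                # what the service produces
    allowed_requesters: List[str]     # who can invoke it
    internal_only: bool               # e.g., always requested by the EA manager
    parent_processes: List[str]       # processes this service participates in
    metrics: List[str]                # how the service is measured

assessment = EAServiceDefinition(
    name="Architectural Assessment Services",
    inputs=["Architectural specification", "Project context"],
    outputs=["Review findings", "Approval or list of required changes"],
    allowed_requesters=["Project teams", "PMO"],
    internal_only=False,
    parent_processes=["SDLC architecture review gate"],
    metrics=["Reviews completed per quarter", "Average turnaround time"],
)
```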

So, my question to my readers is: what are your EA services? I’m sure I’m not the only EA out there who’s had to think about this. Even if your EA organization hasn’t, the next time you fill out your time card, think about what “service bucket” your efforts fall into. Do my categories make sense for what you do each week or month? If not, what’s missing? If it’s unclear which bucket something should go in, how would you redefine them? A consistent set of EA service definitions could definitely help all of us.

Build What Sells, Don’t Sell What You Build

I just read the Forrester document, “Inquiry Spotlight: Building An EA Practice, Q2 2009,” written by Gene Leganza and Katie Smillie, with Alex Cullen and Matt Czarnecki, and there’s one line that really stuck out for me.

When creating EA artifacts, you should focus on “building what sells” more than on “selling what you build.”

I think I should take that line and make a poster out of it. This consumer-first thinking is really, really important. I’ve seen too many things that were written for the convenience of the author rather than the consumer, and it is never as successful as it should be. This applies to EA artifacts, to user interfaces, to services, and to just about anything else that’s supposed to be consumed by someone other than the person who wrote it. If you don’t make it easy for the intended audience to consume, they won’t consume it at the rate you desire. Another trap you can fall into is failing to recognize that you have more than one audience. If you assume a single, broad audience, then you wind up with a least-common-denominator approach that frequently provides too little to be useful to anyone. While some “100-level” communication is a good thing, it must be followed up with the “200-level” and “300-level” communication that is targeted at particular audiences and particular roles. For example, if you’re planning your communication around an enterprise SOA strategy, you may create some 100-level communication that is broad enough to whet the appetite of project managers, organizational managers, and developers, but not enough to tell any of them how it will impact them in detail. Follow that up with pointed conversations targeted specifically at the role of the project manager, the organizational manager, and the developer to get the messages across to them, and them only.

Finally, getting back to EA artifacts, consider not just the roles, but also the context in which the artifacts will be used. If the artifacts are used in project activities, then structuring them so that the appropriate information is provided in the appropriate phase of the project is a good thing. Organizing the artifact so that people must hunt all over for the information they need at a particular point in time is not.

Once again, take Gene and Katie’s words to heart: Focus on building what sells more than selling what you build.

Factoring in Barriers to Entry

As part of understanding what business projects need to do to leverage BPM technology, I’ve been trying to eat my own dog food, so to speak. I’ve been looking at some of the EA processes and trying to model them using BPMN. These processes aren’t terribly complex, but at the same time, there is potential for technology to assist in their execution. They involve email distribution, task assignment, timer-based checks, notifications, etc., all of the same things that a process for a non-IT department may want to leverage too. The problem is that I look at these simple, lightweight processes and think about the learning curve required to leverage the typical enterprise BPM suite, and that big barrier to entry is a large inhibitor. Even using BPMN rather than the built-in flowchart template, I can generate the process model in Visio very quickly.
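To give a feel for just how lightweight these processes are, here’s a rough sketch of a hypothetical one expressed as plain data. The process and its steps are made up; the point is how little structure is actually involved: a few tasks, assignments, a timer, and a notification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    name: str
    assigned_to: str                            # a role or distribution list
    notify_via_email: bool = False              # email distribution on entry
    reminder_after_days: Optional[int] = None   # timer-based check

# A hypothetical EA document-review process: quick to model in Visio or BPMN,
# but heavy to implement in a typical enterprise BPM suite.
review_process = [
    Step("Distribute draft reference architecture", "EA team", notify_via_email=True),
    Step("Collect comments", "Review board", reminder_after_days=5),
    Step("Incorporate feedback and publish", "Document owner", notify_via_email=True),
]
```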

A challenge that the BPM space faces right now is its barrier to entry. There are tools, most prominently SharePoint, that excel at having a low barrier to entry. When a team can quickly create a process model that can orchestrate and manage their work requests via an intranet site, that’s a big win. At the same time, does that low barrier to entry eventually become a boat anchor, either through infrastructure that scales poorly or through a lack of more advanced features such as process analytics? On the flip side, does the business need another technology that requires significant development expertise and months-long (or longer) projects to utilize? What’s the right path to take?

My opinion is that adoption is more important than sophistication, especially with the rate of technology change today, and the influence of consumer technologies on the enterprise. There is so much an individual user can do today, and human nature is to take the path of least resistance. This doesn’t necessarily mean that we should all pick the tool with the lowest barrier to entry, but it does mean that whatever tool you choose, you must get that barrier to entry to an appropriate point, especially if there are competing technologies that can be used. If your BPM technology requires that every process initiative have the equivalent of a senior developer involved, that could be a big problem if it’s something that the end users could do using Visio, Excel, or anything else. Find a way to lower the barrier.

Book now available via Safari Books Online

Thanks to Google alerts, I found out that my book, SOA Governance, is now available via Safari Books Online. You can access it here. If you enjoy it, consider voting for me as the Packt Author of the Year.

Maturity Levels

While I was working on a maturity model, a colleague pointed out a potential pitfall to me. The way I had defined the levels, they were too focused on standardized processes, which was not my intent. Indeed, many maturity efforts, such as CMMi, tend to be all about establishing standard processes across the enterprise. The problem with this is that just because you have standard processes doesn’t mean you’re actually getting the intended results from the capability. I’m sure this will ring true with my friend James McGovern, who has frequently lambasted CMMi on his blog. So, to fix things, I propose the following maturity levels, and I’d like feedback from my readers.

  1. Not existent/Not applicable: The capability either does not exist at the organization, or is not needed.
  2. Ad hoc: The capability exists at the organization; however, the organization has no idea whether there is consistency in the way it is performed (within a team or across teams), there is no way to measure the costs and benefits associated with it, and there are no target costs and benefits associated with it.
  3. Measurable: The capability exists at the organization, and the organization is tracking the costs and benefits associated with it. There is no consistency in how the capability is performed either within teams or across teams, as shown by the measurements. The organization does not have any target costs and benefits associated with it.
  4. Defined: The capability exists at the organization, the organization is tracking the costs and benefits associated with it, and the organization has defined target costs and benefits associated with the capability. There is inconsistency, however, in achieving those costs and benefits. Note that different teams can have different target costs and benefits, if the organization believes that a single, enterprise standard is not in its best interest.
  5. Managed: The capability exists at the organization, the organization is tracking the costs and benefits associated with it, target costs and benefits have been defined, and the teams executing the capability are all achieving those target costs and benefits.
  6. Optimizing: The capability is fully managed and processes exist to continually monitor both the performance of the teams performing the capability as well as the target costs and benefits and make changes as needed, whether it is new targets, new operational models (e.g. switching from a centralized approach to a decentralized approach, relying on a service provider, etc.), new processes, or any other change for the better.

Maturity levels need to show continual improvement, and they can’t be solely about standardizing a process, since a process may not need to be standardized across the enterprise, nor will standardized processes necessarily achieve the desired cost levels. Standardization is one way of getting there, and I’ve tried to make these descriptors applicable to many different paths of getting there. Let me know what you think.
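To make the progression concrete, here’s a rough sketch of these levels as code, with a trivial helper that maps a few yes/no answers onto a level. The questions are just a paraphrase of the definitions above, not a formal assessment instrument.

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    NOT_EXISTENT = 1   # capability does not exist or is not needed
    AD_HOC = 2         # exists, but no measurement and no targets
    MEASURABLE = 3     # costs/benefits tracked, but no consistency and no targets
    DEFINED = 4        # targets defined, but not consistently achieved
    MANAGED = 5        # targets consistently achieved by all teams
    OPTIMIZING = 6     # managed, plus continuous monitoring and adjustment

def assess(exists: bool, measured: bool, targets_defined: bool,
           targets_achieved: bool, continuously_improved: bool) -> MaturityLevel:
    """Map answers about a capability to a level, per the definitions above."""
    if not exists:
        return MaturityLevel.NOT_EXISTENT
    if not measured:
        return MaturityLevel.AD_HOC
    if not targets_defined:
        return MaturityLevel.MEASURABLE
    if not targets_achieved:
        return MaturityLevel.DEFINED
    if not continuously_improved:
        return MaturityLevel.MANAGED
    return MaturityLevel.OPTIMIZING

# Example: measured, targets defined, but teams not yet hitting them consistently.
print(assess(True, True, True, False, False))  # MaturityLevel.DEFINED
```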

Packt Author of the Year Competition

The publisher of my book, Packt Publishing, has announced a competition for Author of the Year. You can find out more about the award here, as well as cast your vote. I’ll be perfectly transparent and state that there is a cash award associated with this, although I’d be posting this even if there wasn’t. I’m proud of the book that I wrote and if others have received value from it, that makes me even happier. If you feel so inclined to recommend me to my publisher, I’d be honored, but know that I’m already honored by the fact that you’ve either read or just considered reading my book. Packt is also giving away some prizes to random voters, so there may be something in it for you, too. Thanks for your consideration, and hopefully, your vote!

Black/White, Coding/Configuration, and other Shades of Gray

I’ve been going through the TOGAF 9 documentation, and in the Application Software section of the Technical Reference Model, there are two recognized categories: Business Applications and Infrastructure Applications. They are defined as follows:

Business applications … implement business processes for a particular enterprise or vertical industry. The internal structure of business applications relates closely to the specific application software configuration selected by an organization.
Infrastructure applications … provide general purpose business functionality, based on infrastructure services.

There’s a lot more to the descriptions than this, but what jumped out at me was the typical black-and-white breakdown of infrastructure and “not” infrastructure. Normally, it’s application versus infrastructure, but since TOGAF uses the term infrastructure application, that obviously won’t work here; you get the point, though. What I’ve found at the organizations I’ve worked with is that there’s always a desire to draw a black-and-white line between the world of infrastructure and the application world. In reality, it’s not that easy to draw such a line, because it’s an ever-changing continuum. It’s far easier to see from the infrastructure side, where infrastructure used to mean physical devices but now clearly involves software solutions ranging from application servers to, as TOGAF 9 correctly calls out in its description of infrastructure applications, commercial off-the-shelf products.

The biggest challenge in the whole infrastructure/application continuum is knowing when to shift your thinking from coding to configuration. As things become more commoditized and more like infrastructure, your thinking has to shift to that of configuration. If you continue with a coding and customization mentality, you’re likely investing significant resources into an area without much potential for payback. There are parallels between this thinking and the cloud computing and software as a service movements. You should use this thinking when making decisions on where to leverage these technologies and techniques. If you haven’t changed your thinking from coding to configuration, it’s unlikely that you’re going to be able to effectively evaluate SaaS or cloud providers. When things are offered as a service, your interactions with them are going to be a configuration activity based upon the interfaces exposed, and it’s very unlikely that any interface will have as much flexibility as a programming language. If you make good decisions on where things should be configured rather than coded, you’ll be in good shape.

Understanding Your Engagement Model

People who have worked with me know that I’m somewhat passionate about having a well-defined engagement model. More often than not, I think we’ve created challenges for ourselves due to poorly defined engagement models. The engagement model normally consists of “Talk to Person A” or “Talk to Team B,” which means that you’re going to get a different result every time. It also means that every interaction is going to be different, because no one comes to Person A or Team B with the same set of information, so the engagement is likely to evolve over time. In some cases, this is fine. If part of your engagement model is to provide mentoring in a particular domain, then you need to recognize that the structure of the engagement will likely be time-based rather than information-based, at least in terms of the cost. Think of it as the difference between a fixed-cost standard offering and a variable-cost engagement (usually based on time) from a consulting firm. I frequently recommend that teams try to express their service offerings in this manner, especially when involved in the project estimation process. Define which services are fixed cost, which are variable cost, and what that variance depends on. This should be part of the process for operationalizing a service, and someone should be reviewing the team’s effort to make sure they’ve thought about these concerns.
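Here’s a minimal sketch of how a team might express its offerings this way for estimation purposes. The offering names and hour figures are purely illustrative; the point is simply making the fixed versus variable distinction explicit.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceOffering:
    """One entry in a team's catalog of offerings, for estimation purposes."""
    name: str
    fixed_hours: Optional[float] = None      # fixed-cost standard offering
    hours_per_week: Optional[float] = None   # variable cost, based on time

    def estimated_hours(self, weeks: int = 0) -> float:
        if self.fixed_hours is not None:
            return self.fixed_hours
        return (self.hours_per_week or 0.0) * weeks

offerings = {
    "Architecture review": ServiceOffering("Architecture review", fixed_hours=16.0),
    "Domain mentoring": ServiceOffering("Domain mentoring", hours_per_week=4.0),
}

# Estimating a project that needs one review plus eight weeks of mentoring.
total = (offerings["Architecture review"].estimated_hours()
         + offerings["Domain mentoring"].estimated_hours(weeks=8))
print(total)  # 48.0
```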

When thinking about your services in the ITSM sense, it’s good to create a well-defined interface, just as we do in the web service sense. Think about how other teams will interact with your services. In some cases, it may be an asynchronous interaction via artifacts. An EA team may produce reference models, patterns, etc. for other teams to use in their projects. These artifacts are designed in their own timeline, separate from any project, and projects can access them at will. Requests to update them based on new information go into a queue and are executed according to the priority of the EA manager. On the other hand, an architecture review is executed synchronously, with a project team making a request for a review, provided they have the required inputs (an architectural specification, in most cases), with the output being the recommendations of the reviewer, and possibly a formal approval to proceed forward (or not).

If you’re providing infrastructure services, such as new servers, or configuration of load balancers, etc., in addition to the project-based interactions, you must also think about what your services are at run-time. While most teams include troubleshooting services, sometimes the interface is lacking definition. In addition, run-time services need to go beyond troubleshooting. When the dashboard lights are all green, what services do you provide? Do you provide reports to the customers of your services? There’s a wealth of information to be learned by observing the behavior of the system when things are going well, and that information can lead to service improvements, whether yours or someone else’s. Think about this when you’re defining your service.

Congratulations Leo and the Canucks

Well, I have to hold true to my bet with Leo de Sousa. Congratulations to the Vancouver Canucks on sweeping my St. Louis Blues in the first round of the playoffs. They actually played very well, and I’d rather see them win than either Detroit or San Jose, so go Canucks. Now I can go back to paying attention to the St. Louis Cardinals (who came back from 4-0 down today to beat the Mets 6-4), and thankfully, Vancouver doesn’t have a major league baseball team.


Thoughts on designing for change

I had a brief conversation with Nick Gall (Twitter: ironick) of Gartner on Twitter regarding designing for change. Back in the early days of SOA, I’m pretty sure that I first heard the phrase, “we need to build things to change” from a Gartner analyst, although I don’t recall which one. Since that time, there’s been a lot of discussion on the subject of designing/building for change, usually tied to a discussion on REST versus WS-*. Yesterday, I stepped back from the debate and thought, “Can we ever design for change, and is that really the right problem?”

As I told Nick, technology and design choices can certainly constrain the flexibility that you have. Think about the office buildings that many of us work in. There was a time when they weren’t big farms of cubicles; they actually had real walls and doors. Did this design work? Yes. Was it flexible enough to meet the needs of an expanding work force? No. You couldn’t easily and quickly create new conference rooms, change the size of spaces, etc. Did it meet all possible changes the company would go through? No. Did the planners ever think that every cubicle would consume the amount of electricity they do today? What about wiring for the Internet? Sometimes those buildings need to be renovated or even bulldozed. The same thing is true on the technology side. We made some design decisions that worked and were flexible, yet not flexible enough for changes that could not have been easily predicted at most companies, such as the advent of the Internet.

Maybe I’m getting wiser as I go through more of these technology changes, but for me, the fundamental problem is not the technology selection. Yes, poor design and technology selection can be limiting, but I think the bigger problem is that we have poor processes for determining what changes are definitely coming, what changes might be coming, and how and when to incorporate those changes into what IT does, despite the available predictions from the various analysts. Instead, we have a reactive, project-driven approach without any sort of portfolio planning and management expertise. On this, I’m reminded of a thought I had while sitting in a Gartner talk on application and project portfolio management a year or two ago: if I’m sitting in a similar session on service portfolio management 5 years from now, we’ve missed the boat and we still don’t get it. Develop a process for change, and it will help you make good, timely design choices. The process for change involves sound portfolio management and rationalization processes.

SOA Governance Book Review

Fellow Twitterer Leo de Sousa posted a review of my book, SOA Governance, on his blog. Leo is an Enterprise Architect at the British Columbia Institute of Technology, and is leveraging the book on their journey in adopting SOA. Thanks for the review, Leo. I’m glad you posted it before the Stanley Cup playoffs begin as my St. Louis Blues will be taking on your Vancouver Canucks, and I wouldn’t have wanted the upcoming Blues victory to taint your review!


Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone, without any reference to my employer. Use of my employer’s name is NOT authorized.