Archive for the ‘Enterprise Architecture’ Category

Is Twitter the Cloud Bus?


Courtesy of Michael Coté, I received a Poken in the mail as one of the lucky listeners to his IT Management and RIA Weekly podcasts. I had to explain to my oldest daughter (and my wife), what a Poken is, and how it’s utterly useless until I run into someone else in St. Louis who happens to have one or go to some conference where someone might have one. Oh well. My oldest daughter was also disappointed that I didn’t get the panda one when she saw it on the website. So, if you happen to own a Poken, and plan on being in St. Louis anytime soon, or if you’re going to be attending a conference that I will be at (sorry, nothing planned in the near future), send me a tweet and we can actually test out this Poken thing.

Speaking of the RIA Weekly podcast, thanks to Ryan Stewart and Coté for the shout-out in episode #46 about my post on RIAs and Portals that was inspired by a past RIA Weekly podcast. More important than the shout-out, however, was the discussion they had with Jeff Haynie of Appcelerator. The three of them got into a very interesting conversation about the role of SOA on the desktop. It was nice to hear someone thinking about things like inter-application communication on the desktop, since integration efforts have been so focused on the server side for many years. What really got me thinking was Coté’s comment that you can’t build an RIA these days without including a Twitter client inside of it. At first, I was thinking about the need for a standard way of doing inter-application communication in the RIA world. Way back when, Microsoft and Apple were duking it out over competing ways of getting desktop apps to communicate with each other (remember OpenDoc and OLE?). Now that the pendulum is swinging back toward the world of rich UIs, it won’t surprise me at all if the conversation around inter-application communication for desktop apps comes up again. What’s needed? Just a simple message bus to create a communication pathway.

In reality, it’s actually several message buses. An application can leverage an internal bus for communication with its own components, a desktop/VM-based bus for communication with other apps on the same host, another bus for communication within a local networking domain, and then possibly a bus in the clouds for communication across domains. Combining this with Coté’s comment made me think, “Why not Twitter?” As Coté suggested, many applications are embedding Twitter clients in them. The direct messaging capability allows point-to-point communication, and the public tweets can act as a general pub-sub event bus. In fact, this is already happening. Just today, Andrew McAfee tweeted about productivity tools on the iPhone (and elsewhere), and someone suggested Remember The Milk, a web-based GTD program with an iPhone client and a very open integration model, which includes the ability to listen for tweets on Twitter that add new tasks. There’s a lightweight protocol to follow within the tweet, but for basic stuff, it’s as simple as “d rtm buy tickets in 2 days”. Therefore, if someone is using RTM for task management, some other system can send a tweet to RTM to assign a task to that Twitter user. The friend/follower structure of Twitter provides a rudimentary security model, but all in all, it seems to work with a very low barrier to entry. That’s just cool. Based on this example, I think it’s entirely possible that we’ll start seeing cloud-based applications that rely on Twitter as the messaging bus for communication.
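As a sketch of how little plumbing this takes, here’s what that “some other system” might look like. This is purely illustrative: it assumes the long-retired, pre-OAuth Twitter v1 REST API, where a basic-auth POST to statuses/update with a status of “d <user> <message>” was delivered as a direct message, and it assumes the RTM tweet syntax shown above.

```python
# Hypothetical sketch: a system adds a task to a user's Remember The Milk
# list by sending a Twitter direct message ("d rtm ...") on their behalf.
# Assumes the old pre-OAuth Twitter v1 API (basic auth), long since retired.
import requests

def send_task_to_rtm(twitter_user: str, password: str, task: str) -> None:
    status = f"d rtm {task}"  # "d rtm" = direct message to the RTM bot
    resp = requests.post(
        "https://twitter.com/statuses/update.json",
        data={"status": status},
        auth=(twitter_user, password),
    )
    resp.raise_for_status()  # surface any API error

# e.g., another system assigning a task over the "Twitter bus":
# send_task_to_rtm("someuser", "secret", "buy tickets in 2 days")
```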

SOA Governance RefCard Now Available

I’m happy to announce I’ve now published a RefCard (reference card) on SOA Governance based on the content in my book from Packt Publishing. If you want to get a taste of what the book has to offer, follow this link over to DZone.com to download it for free.

Don’t Go On an IT Diet, Change Your Behavior

I’ve refrained from incorporating the current economic crisis into my posts… until now. In a recent discussion, I compared the current situation to what many, many people do every new year. They resolve to lose weight, go on some fad diet or start going to the fitness center, and maybe lose that weight, but then they revert to their old behavior within a few months and gain the weight (and potentially more) right back.

Enterprises are in a similar state. Priorities have shifted to where cost containment and cutting are at the top of the list. While the knee-jerk reaction is to stop investing in any long-term initiatives, this could be a risky approach. If I don’t eat for 4 days, I may quickly drop the weight I need to, but guess what? I still need to eat. Not eating for 4 days will only make me more unhealthy, and then when I do eat, the weight will come right back.

These times should not mean that organizations drop their efforts to adopt SOA, ITIL/ITSM, or any other long-term initiative. Most of these efforts try to achieve ROI through cost reduction by eliminating redundancy in the enterprise, which is exactly what is needed today! The risk, however, is that these efforts must be held accountable for the goals they claim to achieve. They must also be prepared to adjust their actions to speed up the pace, if that is possible. No one could have predicted the staggering losses we’re seeing, and sometimes it is necessary for a company’s survival to adjust the pace. If these efforts are succeeding in reducing costs, however, we shouldn’t kill them just because they take a longer time to achieve their goals; otherwise we’ll find ourselves back in the same boat when the next change in priorities or goals happens.

The whole point of Enterprise Architecture, SOA, and many of these other strategic IT initiatives is to allow IT to be more agile: to respond more quickly to changes in the business objectives. Guess what? We’re in the middle of an unprecedented change. My guess is that the best survivors of this meltdown will be the organizations that don’t go on a starvation diet, but instead simply recognize that their priorities and goals have changed and execute without significant disruption to the way they utilize IT. If your EA team, SOA efforts, ITIL efforts, or anything else are inefficient and not providing the intended value, then you’re at risk of being cut, but you were probably at risk anyway; now someone just happens to be looking for targets. If EA has been adding value all along, then you’ll probably be a strategic asset that will help your organization weather the storm.

Most Read Posts for 2008

According to Google Analytics, here are the top read posts from my blog for 2008. This obviously doesn’t account for people who read exclusively through the RSS feed, but it’s interesting to know what posts people have stumbled upon via Google search, etc.

10. Governance Does Not Imply Command and Control. This was posted in August of 2008, and intended to change the negative opinion many people have about the term “governance.”

9. To ESB or not to ESB. This was posted in July of 2007, and gave a listing of five different types of ESBs that exist today and how they may (or may not) fit into your environment.

8. Getting Started with SOA Governance. This was posted in September of 2008, just before my book was released. It emphasizes a policy first approach, stressing education over enforcement.

7. Dish DVR Upgrade. This was posted in November of 2007 and had little to do with SOA. It tells the story of how Dish Network pushed out an upgrade to the software on their DVRs that wiped out all of my existing timers, and I missed recording some shows as a result. The lesson for IT: even if you think there’s no chance that a change will impact someone, you still should make them aware that a change is occurring.

6. Most popular posts to date. This is rather humorous. This post from July of 2007 was much like this one: a list of the posts that Google Analytics had shown as most viewed since January of 2006. Maybe this one will show up next year. It at least means someone enjoys these summary posts.

5. Dilbert’s Guide to Governance. In this post from June of 2007, I offered some commentary on governance in the context of a Dilbert cartoon that was published around the same timeframe.

4. Service Taxonomy. Based upon an analysis of the search keywords that bring people to my pages, I’m not surprised to see this one here. This was posted in December of 2006, and while it doesn’t provide a taxonomy, it provides two reasons for having taxonomies: determining service ownership and choosing the technical implementation platform. I don’t think you should have taxonomies just to have taxonomies. If the classification isn’t serving a purpose, it’s just clutter.

3. Horizontal and Vertical Thinking. This was posted in May of 2007 and is still one of my favorite posts. I think it really captures the change in thinking that is required for more strategic solutions. However, I also now realize that the challenge is in determining when horizontal thinking is needed and when it is not. It’s not an easy question, and it requires a broad understanding of the business to answer correctly.

2. SOA Governance Book. This was posted in September of 2008 and is when I announced that I had been working on a book. Originally, this had a link to the pre-order page from the publisher, later updated to include direct links there and to the page on Amazon. You can also get it from Amazon UK, Barnes and Noble, and other online bookstores.

1. ITIL and SOA. Seeing this post come in at number one was a surprise to me. I’m glad to see it up there, however, as it is something I’m currently involved with, and also an area in need of better information. There are so many parallels between these two efforts, and it’s important to eliminate the barriers between the developer/architecture world of SOA and the infrastructure/operations world of ITIL/ITSM. Look for more posts on this subject in 2009.

Thank you!

I just happened to check my FeedBurner statistics and see that as of the first business day of 2009, I had over 1,000 subscribers to this blog for the first time. With a nice push of new subscribers from serverside.com due to Jack van Hoof’s review of my book that was posted there, I’m now over 1,100. While my posting frequency has slowed a bit, I hope to continue to provide useful information to all of you. As a corporate practitioner, I always enjoy hearing what peers are doing, so if there’s something you’d like me to talk about that may be relevant to your work, drop me an email or direct message on Twitter, and if it’s something I’ve thought about or worked on, I’ll do my best to share what I can. Again, thanks for your readership.

Defining the Technical Service Record

Here’s a topic for which I’d really like some community input, and I think it’s something that many of my readers have probably had to do, are doing, or would be interested in the result. If you’re adopting SOA, you’re likely using a Service Registry/Repository of one form or another. It can range from a set of scribbled notes on a whiteboard or post-its in some architect’s office/cube, to Excel, to one of the many vendor products available for this purpose. So, assuming you are actually using one of these mechanisms, what are you recording about your services and the consumers of those services, and how/where are you capturing the relationship between the two? In this post, I’m going to start with the first question, the answer to which constitutes what I call the technical service record. Please note that the focus of this is on services that have a programmatic interface, and not the broader business service or ITIL service space, although I am very interested in the overlap between this record and the service record that would exist in an ITIL v3 Service Portfolio.

Here’s a list of items that could be recorded about a service to get the discussion started. For each item, I’ve provided a description of what that item is, whether or not it is required, and the visibility of that item (public, consumers only, service manager only, etc.). Please contribute your thoughts on other attributes that could/should be captured, along with their optionality (is that a word?) and visibility.

Attribute | Description | Required | Visibility
Name | Human-readable name of the service | Yes | Public
Description | Human-readable description of what the service does | Yes | Public
Owner/Manager | The person accountable (in the RACI sense) for the service. At a minimum, this is the person to contact in order to begin using the service. | Yes | Public
Question: Should the owner be public, or only visible to registered consumers? A registry/repository could facilitate interaction with a potential consumer without publicly revealing the owner’s name.
Interface Type (or should it be types?) | The technical interface type, such as SOAP, REST, POX/HTTP, etc. | Yes | Public
Internal/External | Is the service exposed internally, externally, or both? | Yes | Public
Note: External users can only see services exposed externally.
Service Type | Taxonomy classification for purposes of mapping to a technology platform | Yes | Internal Only
Production WSDL URL | URL for the production WSDL (required for Web Services) | No * | Consumers
Deployment Platform | On which logical platform is the service hosted? | No * | Internal Only
Deployment Location | The physical location(s) of the service. Preferably, this should be a link into the CMDB. | No * | Internal Only
Test Plan/Scripts | A link to a test plan or specific test scripts for the service, as provided by the provider. | No * | Internal Only
Performance Profile | The expected resource utilization of the service. | No * | Internal Only
Development Cost | The cost incurred in creating the service. | No * | Internal Only
Estimated Integration Cost | The expected cost for consumers to integrate service usage. | No * | Internal Only
Current ROI | Current development ROI based upon development cost, cost to integrate, and the current number of consumers. | No * | Internal Only
Status | Status of the service (planned, in development, in production, decommissioned). | Yes | See note
Note: The visibility of this is directly tied to the state. For internal services, status is open to the public. For external services, a service should only be visible if it is in production.
Version | The version of the service associated with this record. | Yes | Public
Created Date | The date this record was created. | Yes | Internal Only
Modified Date | The date this record was last modified. | Yes | Internal Only

Of course, now that I’ve attempted to put this list down with some simple attributes, I’ve realized that whether things are required or visible to particular parties depends on the status of the service, whether it is exposed externally or not, the interface type, etc. It’s just hard to make that fit into an HTML table and still have this entry be readable. Anyway, if there isn’t anything proprietary or confidential about the structure of your service records, consider sharing it here. I promise to publish the end result of this effort here for all to share for free. This isn’t limited to Web Services, either. If you’re using REST, what information would you provide about the collection of resources that comprise the service to potential users of those services? I would guess that many of the above attributes would still apply, and could certainly be accessed themselves through a REST interface, since a service record is a resource in and of itself.
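To make the discussion concrete, here’s a minimal sketch of such a record as a data structure, with the status-dependent visibility rule from the table expressed as code. All names here (ServiceRecord, Status, and so on) are my own invention for illustration, not taken from any registry/repository product.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    PLANNED = "planned"
    IN_DEVELOPMENT = "in development"
    IN_PRODUCTION = "in production"
    DECOMMISSIONED = "decommissioned"

@dataclass
class ServiceRecord:
    name: str                 # required, public
    description: str          # required, public
    owner: str                # required; public vs. consumers-only is the open question
    interface_type: str       # e.g. SOAP, REST, POX/HTTP
    exposure: str             # "internal", "external", or "both"
    service_type: str         # taxonomy classification (internal only)
    status: Status
    version: str
    production_wsdl_url: Optional[str] = None   # required only for Web Services
    deployment_platform: Optional[str] = None   # internal only
    deployment_location: Optional[str] = None   # ideally a link into the CMDB

    def visible_to_external_users(self) -> bool:
        # Per the Status note above: external users should only ever see
        # externally exposed services that are in production.
        return (self.exposure in ("external", "both")
                and self.status is Status.IN_PRODUCTION)
```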

Thanks for your participation! If you’d prefer to send me your information directly without publicly posting it here, send me an email at todd at biske dot com or you can send me a direct message on twitter at toddbiske.

RIAs and Portals

In a RIA Weekly podcast, Michael Coté and Ryan Stewart had a brief conversation on the role of RIAs in portals. They didn’t go into much detail on it, but it was enough to get me noodling on the subject.

In the past, I’ve commented on the role of widgets/gadgets ala Apple’s Dashboard and Vista’s Sidebar and how I felt there was some significant potential there. To date, I haven’t seen any “killer app” on the Mac side (I have no idea about Vista given that I don’t use it at home or at work). One thing that I found curious, however, was that when I went looking for a decent Twitter client for the Mac, there was no shortage of dashboard widgets, but actually very few desktop apps. I wound up choosing Twirl initially, and am now using TweetDeck. Both of these are Adobe AIR applications.

So what does this have to do with portals? Well, my own view is that your desktop is a portal. A portal should provide easy access to all of the things you need to do your job. The problem with desktops today, however, is that the typical application is so bloated that the startup/quit process is very unproductive, and if you leave applications open all the time, you need dual monitors (or a really big monitor) and a boatload of memory (even though most of it isn’t getting used). For this reason, I still really like the idea of these small, single-purpose widgets that do one thing really well. The problem right now, however, is that Dashboard and Sidebar fall into the out-of-sight/out-of-mind category. I want my Twitter client in a visible portion of my desktop at all times, or at least with the ability to post a visual notification somewhere. If I leverage a Dashboard widget, it’s invisible to me unless I hit a function key. It’s out-of-band by intent, and there are things that belong there. That being said, the organizational features of Dashboard could easily be applied to the desktop as well. If I had a bunch of lightweight widgets that I used to do the bulk of my work always available on my desktop, that would be great. It had better perform better than the current set of applications that I have set to start at login, however.

Where does RIA fit in? I don’t know that I’d need portability from my desktop into a browser-based portal environment. I’m sure there are people out there who do everything they need to do on a daily basis via Firefox and a whole bunch of plugins. I’ve never tried it, nor do I have any interest in doing so, but for people in that camp, common technology between a desktop portal and a browser-based portal could be a good thing. For me, my primary interest is simply getting a set of lightweight tools for 80% of my day-to-day tasks that aren’t so bloated with stuff I don’t need. I thought a bit about portability of my desktop environment across machines (i.e. the same TweetDeck columns at work and at home), but I think that’s more dependent on these widgets storing data in the cloud than on storing the definition of my desktop in the cloud somewhere, though that might be of interest as well.

The gist of all of this is that I do believe there are big opportunities out there to make our interaction with our information systems more efficient. Can RIAs play a role? Absolutely, but only if we focus on keeping them very lightweight, and very usable.

Conferences for Enterprise Architects

Brenda Michelson asked the blogosphere, “What does a ‘would & could attend’ IT conference look like?” In her post, she suggested some items that are required for establishing initial interest (i.e. things that make us say, “I would like to attend that”), including credible speakers, compelling topics, peer interaction, immersive experience, participatory programs, etc. She then called out some constraints that come into play when answering whether or not we could attend, including cost, proximity, and dates. The premise is that finding the right intersection of attributes creates the “would & could attend.”

First, let me describe why I attend conferences. I don’t normally use conferences to learn about new areas. Instead, I go to conferences to extend my knowledge in an area. Sometimes that may be an effort to go from “100-level” knowledge to “200-level,” and sometimes it may be in areas where I know a lot and I’m just hoping to find some nugget through sharing experiences. Given that, the conference sessions that interest me the most are almost always the ones that involve a panel of practitioners. By practitioners, I mean corporate IT employees, not consultants, analysts, or vendors. This doesn’t mean that I don’t think consultants, analysts, and vendors have anything good to contribute; it just means that their presentations have less potential value for me. While any speaker should view the effort as a marketing opportunity, it obviously has more of an impact on the bottom line for consultants, analysts, and vendors. A practitioner must understand that their speaking does have an impact on recruiting efforts for their employer; however, it’s typically not a primary concern, and it’s unlikely that anyone is tracking the number of recruiting leads that came out of the speaking engagement. The practitioner is there to share best practices and hopefully engage in conversations with peers about their efforts in the same space. Unfortunately, such sessions are few and far between.

Another factor that comes into play on the “would” portion is the agenda. I’ve never attended an “un-conference,” and I think this would be a bit more difficult to pull off in the EA space than in the general development space. I’m not against the concept, but I think you need a very strong base of people committed to ensuring that conversations on interesting topics will happen. My experience with items in the middle, like birds-of-a-feather sessions, is similar. Unless there’s someone in the discussion committed to keeping the conversation going, the sessions are duds. At the same time, there’s a risk that such a person becomes the sole presenter. A facilitator who ensures that discussion, rather than presentation, happens is critical. I’d err on the side of having defined topics and pre-planned questions, but then structuring the sessions to allow lots of time for interaction. Here, the moderator/facilitator is key. If the audience isn’t willing to participate, the facilitator must fill the time with relevant questions. This is a big risk, because for every one person I find who is willing to share experiences, there are probably 10 or 20 who are only interested in receiving, whether due to their own personality, level of knowledge, restrictive information sharing policies of their employer, or one of many other reasons.

The other challenge is that someone needs to pay for all of this. Practitioners don’t have a marketing budget to fund IT conferences the way a vendor, consultant, or analyst firm might. As a result, I think you’re more likely to find these types of conversations through local user groups; however, the issue I have with those is that they always occur during evenings, time which I spend with my family. I’d rather be doing this during my work hours, as these conferences are work-related. Additionally, unless you work in a very big city, there may not be enough participants to sustain the discussion. I live and work in the St. Louis metro area, and there are still many large organizations here that don’t have an EA practice, so sustaining something at a local level would be difficult. Therefore, I’m willing to sacrifice some portion of the conference time for vendor, analyst, or consultant presentations that would offset the costs to me. That being said, I’d like to see at least 50% of the sessions come from practitioners, and I’d be willing to give up frills (meals, conference schwag, evening entertainment, etc.) to keep that balance.

As for the other factors (location, dates, costs, etc.), all of them have been less of a decision factor for me. Obviously, in today’s economy, the cheaper the better, and it’s always nice when I can consider bringing my family along and let them be entertained by the area while I go learn things, but it usually all comes down to whether or not I’m going to learn something and have some facilitated interaction with my peers. By the way, I also think that so-called “networking sessions” where they group people at a meal according to their industry vertical or some other attribute don’t cut it. While you may have a good conversation about the weather or current events, and may meet some nice people, such sessions are unlikely to result in information sharing relevant to the conference topic unless someone steps in as a facilitator.

Note: I just read James McGovern’s response to Brenda’s post, and I like his idea of a “Hot Seat” question. I would have no problem being asked questions without knowing the questions in advance, with the appropriate restrictions on discussing intellectual property and keeping questions on the topic at hand.

Jack van Hoof Reviews my SOA Governance Book

Jack van Hoof posted a review of my SOA Governance book on his SOA and EDA blog. In it, he states:

Reading this book felt like taking a hot shower. As professional architects, we all understand what Todd has written (or don’t we?). But owning one handy book of hardly 200 pages with all those thoughts structured and combined at an appropriate level of understanding feels like possessing a jewel.

Thanks for the review, Jack. You can read his full review here.

Finding Value in BPM/Workflow Technology

Some recent conversations about the use of workflow and orchestration technologies got me thinking about how to properly look for value when trying to apply these technologies, whether associated with a BPM suite, or with any of the other multitude of tools out there that claim to have orchestration/automation/workflow/work management capabilities.

The one common term that always comes up is process. All of these tools always wind up having some sort of process definition be a requirement. There is one big factor, however, that has a significant impact on where you should look for value, and that’s whether those processes involve manual (i.e. done by a person) activities or not.

Let’s handle the simpler of the two cases first: the one where there are no manual activities whatsoever. In this case, what we’re really talking about is process automation. If there are no manual steps, then there is no reason the entire process can’t be fully automated. If we fully automate a process, what are the factors in the value equation? Clearly, if the process isn’t fully automated today, there is a one-time gain in efficiency. The execution time should move from a variable, potentially unpredictable value to a consistent, predictable value. This is the case regardless of what tools we use to automate it. Theoretically, I could automate the process with scripts or a programming language and achieve the same value. If you agree with me, then the real value contribution in applying BPM/workflow technologies lies not in the run-time space, but in the development-time space. By either reducing inefficiencies in the communication between analysts and developers through a common language (a process model), or by improving development productivity through the drag-and-drop visual environments of most tools, value can be obtained through time-to-delivery. Beyond this, there is probably not much value to be obtained through the “management” portion of the BPM suite. Even if the process is subject to frequent change, the area of interest is the time to deliver the change, not optimization of the process itself, since by fully automating the process, we should assume it’s also fully optimized.

If we throw manual tasks into the equation, then we have a different story. While the development-time efficiencies certainly still apply, there’s now significant value that can be obtained through process analysis and optimization. I need to know how long those manual tasks take, why Judy accomplishes more tasks than John, what chaos ensues when Fred calls in sick, what the impact of task assignments and escalations is, etc. This information can be obtained by managing the processes, through instrumentation, analytics, and reporting. By doing so, we can get into a cycle of continuous improvement, and strive to optimize the manual efforts that can’t be automated.
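As a toy illustration of that run-time value, here’s a sketch that mines task-completion events for exactly the kinds of questions above (how long tasks take, who completes more of them). The event format and sample data are invented for the example.

```python
# Toy process analytics: mine task events for the questions above.
from collections import defaultdict
from statistics import mean

# (task_id, assignee, minutes_to_complete) - invented sample data
events = [
    ("t1", "Judy", 12), ("t2", "Judy", 9), ("t3", "Judy", 11),
    ("t4", "John", 30), ("t5", "John", 25),
]

durations = defaultdict(list)
for _task_id, assignee, minutes in events:
    durations[assignee].append(minutes)

for assignee, mins in sorted(durations.items()):
    print(f"{assignee}: {len(mins)} tasks, avg {mean(mins):.1f} min each")
# The output points at where continuous improvement should focus
# (here, John's tasks run noticeably longer than Judy's).
```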

Now, the reason I bring this up is that there is no shortage of tools that claim to have workflow/business process capabilities. If you have a BPM suite, you’re now faced with the question of which workflow tool to use. What you need to think deeply about is where you’re going to get your value. Products with embedded workflow capabilities may have an advantage in development-time value because they come pre-populated with actions/tasks appropriate to the context of that tool, while a generalized BPM platform may not. The flip side, however, is that those same tools may only provide one piece of the BPM suite, namely business process development. If what you really need is business process management, with the ability to monitor, analyze, and optimize the manual parts of your processes, then you may need to sacrifice some development-time efficiencies to get the more important run-time value.

Finally, keep in mind that not all work can be defined by a process. As Keith Harrison-Broninski discusses in his book, Human Interactions: The Heart And Soul Of Business Process Management: How People Really Work And How They Can Be Helped To Work Better, there will always be ad hoc work. You’ll still need to consider how to best utilize technology to support those ad hoc activities, rather than trying to define a rigid process for something that isn’t one.

More on review boards…

In response to my post on the “Effective Governance” talk given at the Gartner EA Summit, Ron Rosenhead said:

For me there are a couple of overlapping issues:
Do project boards actually know what they are established for? Plus, how well trained are members of project boards? I have to say that my experience here in the UK is that Boards are established sometimes with (overly) large numbers, give little guidance and are not well trained in understanding what they are to do and in project management. They usually receive the thumbs down from project managers who say they add little or no value.
Yes, they should set the parameters of decision making and enable others to make decisions. If I was to ask everyone who came through courses we ran in 2008 very few would say that this had actually happened.

His first question is really a great point. All too often, these boards are created without sufficient direction to be effective. If I were on one of these boards, even though it might be boring, I’d really want to be able to rubber-stamp as many of the projects as possible. That can only happen if the board effectively sets expectations in advance, so the project teams know what they’re in for. If the project team is forced to guess what the board will want, it’s far more likely that they’ll guess incorrectly. At the same time, once the expectations are set, it’s also important for the review board to move through the review as quickly as possible. If the team has done its homework and provided the necessary information, don’t waste the project team’s time by walking through the answers for an hour knowing full well that they’ve complied with the policies. This is why I like having explicit policies and think that the use of self-assessments via scorecards can be a very powerful tool.
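Here’s a minimal sketch of what such a self-assessment scorecard could look like; the policy names are invented for illustration. The point is that explicit, documented policies make compliance checkable by the project team itself, before the board ever meets.

```python
# Toy self-assessment scorecard: the policies are explicit and published,
# so a project team can score itself before the review board ever meets.
POLICIES = [
    "Service interface reviewed against the canonical model",
    "Service registered in the registry/repository",
    "Capacity estimate provided to operations",
]

def compliance(answers: dict[str, bool]) -> float:
    """Fraction of published policies the team self-reports as met."""
    return sum(answers.get(p, False) for p in POLICIES) / len(POLICIES)

answers = {POLICIES[0]: True, POLICIES[1]: True, POLICIES[2]: False}
print(f"Self-assessed compliance: {compliance(answers):.0%}")
# A board could fast-track anything at 100% and spend its meeting time
# only on the exceptions, rather than an hour of "did you do this?"
```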

When is Redundancy Okay?

A common theme that comes up in architecture discussions is the elimination of redundancy. Simply stated, it’s about finding systems that are doing the same thing and getting rid of all of them except one. While it’s easily argued that there are cost savings just waiting to be realized, does this mean that organizations should always strive to eliminate all redundancy from their technology architectures? I think such a principle is too restrictive. If you agree, then what should the principle be?

The principle that I have used is that if I’m going to have two or more solutions that appear to provide the same set of capabilities, then I must have clear and unambiguous policies on when to use each of those solutions. Those policies should be objective, not subjective. So, a policy that says “use Windows Server and .NET if your developers prefer C#, and use a Java platform if your developers prefer Java” doesn’t cut it. A policy that says, “use C# for the presentation layer of desktop (non-browser) applications, and use Java for server-hosted business-tier services” is fine. The development of these policies is seldom cut and dried, however. Two factors that must be considered are the operational model/organizational structure and the development-time values/costs involved.

On the operational model/organizational structure side of things, there may be a temptation to align technology choices with the organizational structure. While this may work for development, the engineering and operations teams are frequently centralized, supporting all of the different development organizations. If each development group is free to choose its own technology, this adds cost for the engineering and operations team, as they need expertise in all of the platforms involved. If the engineering and operations functions are not centralized, then basing technology decisions on the org chart may not be as problematic. If you do this, however, keep in mind that organizations change. An internal re-organization or a broader merger/acquisition could completely change the foundation on which the policies were defined.

On the development side of things, the common examples where this comes into play are environments that involve Microsoft or SAP. Both of these solutions, while certainly capable of operating in a heterogeneous environment, provide significant value when you stay within their environments. In the consumer space, Apple fits into this category as well. The model works best when it’s all Apple/Microsoft/SAP from top to bottom. There are certainly other examples; these are just the ones that people associate with this more strongly than others. Using SAP as an example, they provide both middleware (NetWeaver) and applications that leverage that middleware. Is it possible to have SAP applications run on non-SAP middleware? Certainly. Is there significant value-add if you use SAP’s middleware? Yes, it’s very likely. If your entire infrastructure is SAP, there are no decisions to be made. If not, you now have to decide whether you want both SAP middleware and your other middleware, or not. Likewise, if you’ve gone through a merger and have both Microsoft middleware and Java middleware, you’re faced with the same decision. The SAP scenario is a bit more complicated because of the applications piece. If we were only talking about custom development, the more likely choice is to go all Java, all C#, or all -insert your language of choice-, along with the appropriate middleware. Any argument about the value-add of one over the other is effectively a wash. When we’re dealing with out-of-the-box applications, it’s a different scenario. Deploying an SAP application that automatically leverages SAP middleware needs to be compared against deploying the SAP application and then manually configuring the non-SAP middleware. In effect, I create additional work by not using the SAP middleware, which chips away at the cost reductions I may have gained by going with a single source of middleware.

So, the gist of this post is that a broad principle that says “eliminate all redundancy” may not be well thought out. Rather, strive to reduce redundancy where it makes sense, and where it doesn’t, make sure that you have clear and unambiguous policies that tell project teams how to choose among the options. Make sure you consider all use cases, such as those where the solution may span domains. Your policies may say “use X if in domain X, use Y if in domain Y,” but you also need to give direction on how to use X and Y when the solution requires communication across domains X and Y. If you don’t, projects will either choose what they want (subjective, bad), or come back to you for direction anyway.
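One way to read “objective, not subjective” is that a policy should be decidable from observable attributes of the solution, never from anyone’s preference. Here’s a toy sketch of that idea; the attribute names and platform choices simply mirror the example policies above and are illustrative assumptions, not recommendations.

```python
# Toy policy lookup: the platform decision is keyed on observable
# attributes of the solution (tier, deployment style), not on preference.
PLATFORM_POLICY = {
    ("presentation", "desktop"): "C#/.NET",  # per the example policy above
    ("business", "server"): "Java",          # per the example policy above
}

def choose_platform(tier: str, deployment: str) -> str:
    platform = PLATFORM_POLICY.get((tier, deployment))
    if platform is None:
        # A gap in the policy means the project comes back for direction,
        # which is exactly what the post warns against; keep it complete.
        raise ValueError(f"No policy covers ({tier}, {deployment}); escalate to EA")
    return platform

print(choose_platform("business", "server"))  # -> Java
```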

Gartner EA Summit: Managing the Migration to Your Future State Architecture

Presenter: Scott Bittler, Gartner

Another presentation from Scott, this time over breakfast. The bulk of this talk was focused on the importance of what he termed the “next state architecture.” Even if we have the future state and current state architectures documented, a challenge exists if we can’t achieve the future state architecture in one step. If that’s the case, then there’s a gap in the prescriptive guidance needed for project teams. If they know they can’t get to the future state, and they don’t have guidance on how to move on from the current state, they’re likely to stick with what they know. Good advice.

There were some specific nuggets outside of this core topic that I also wanted to call out. First, he said that the most important EA deliverable is principles, because it’s those principles that lead to consistent decision making. The talk wasn’t focused on this, so he didn’t go into depth, but some examples of these principles would be good. I definitely see the importance in these and agree with his statement. I’ve been in many situations with two (or more) compelling options where we seem to be at a stalemate. The principles need to assist in getting decisions made.

Second, I liked the fact that he said that EA’s role is to provide prescriptive guidance so that appropriate choices are made on projects and programs. This emphasizes the point that I was hoping would be made in his governance talk yesterday. Provide the policies, and anyone can make the right decisions.

Finally, the last comment he made was that with the advent of EA-focused web sites, etc., claiming ignorance when confronted with non-compliance (“I didn’t know I was supposed to do that”) is unacceptable these days. Here, I disagree. I make extensive use of RSS feeds in my work so that I get information pushed to me, but I know many of my colleagues do not. A web site is still a pull model, and there are very few people I know of who have the discipline to regularly check common web sites. EA has to be accountable for the communication effort and for ensuring that information gets pushed out to the people who need it. Putting it on a web site isn’t enough. If EA is serious about achieving compliance, then it should be serious about pushing the information out. Create a formal communication plan and execute it.

Gartner EA Summit: Case Study from Health Care Service Corporation

In this session, Bernadette Rasmussen, Chief Enterprise Architect at Health Care Service Corporation, gave a case study discussing their efforts to establish a future-state architecture. The highlight of this session for me was the fact that a deliverable of their future state architecture was a formal communication plan, and then the actual communication activities articulated in that plan. This included large presentations for lots of people, DVDs containing an overview, development of on-line training, formal communication to senior IT leadership (who in turn had them communicate it to senior leadership outside of IT), and more. I’ve had the opportunity to work on one enterprise-level effort with someone who was passionate about communication and had us develop a similar plan, and I think it was a huge contributor to the success of the effort. Developing the artifacts is one thing, but if people don’t know they exist, they won’t get used.

Governance and Iterative Development

Chuck Allen, in this blog entry posted after he read my book, felt that the book was missing a discussion of the role of iteration and test-driven development in building a canonical model. He felt that my description of the role of a canonical model read like a waterfall methodology. I had posted a comment on his blog, but it hasn’t shown up there, so I thought I’d post a response here.

There are two things that came to my mind as a result of Chuck’s post. First, Chuck’s viewpoint is consistent with a lot of people’s thinking about governance as some big, heavyweight process that has more in common with BUD (big up-front design) practices. When it is applied to agile methodologies and iterative development, they feel it won’t work. That is not my view, however. My view is that governance is a requirement regardless of your methodology. If your project teams feel it’s getting in the way, it’s not that you need to get rid of governance; it’s that you need to change your approach. Where teams get frustrated is where they’re forced to go before some review board or reviewer who starts asking them, “Did you do this? Did you do that?” and the answer is always, “No, I didn’t know I needed to do that.” Therein lies the rub. The team didn’t know about the policies that existed. If the policies aren’t documented, how can we expect projects to be compliant? If the policies are documented, then there should be no reason why a technical lead or project architect can’t bring them up as appropriate within an iterative approach, or as part of some up-front design, if that’s your preferred approach.

The second thing that came to mind is more about developing those policies and that reference material. If we’re adopting SOA at an enterprise level, then there will need to be policies that define what that “enterprise” success is. My book calls out what those reference materials are, because those are what’s important to good governance. The book did not, however, go into depth on how some of those artifacts would get created. It doesn’t describe how to develop a canonical model or a business capability map, rather, it describes how those artifacts should be used to achieve SOA success. That is the governance question. Developing a business capability map is a business analysis and architecture question. Developing a canonical model is an information architecture question. There are books out there that can teach you how to do that. To Chuck’s point, however, when these artifacts are intended to define something at an “enterprise” level, there is significant risk that they never get created because we go into analysis paralysis. I did call this out in my book, as Chuck pointed out, but I think he offers some good advice that it may make sense to not only apply iterative approaches to your software development effort, but also to your efforts to produce policies and reference material. That’s embodied in my four processes of governance, where the last process is one of continuous improvement. Establish some policies, communicate and educate, enforce them, measure the impact, and then adjust as needed.


Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone without any reference to my employer. Use of my employer’s name is NOT authorized.