OTN Podcast on EA Communication

All content written by and copyrighted by Todd Biske. If you are reading this on a site other than my “Outside the Box” blog, it’s probably being republished without my permission. Please consider reading it at the source.

I participated on a panel discussion on communication and enterprise architecture, hosted by Bob Rhubart of Oracle. Part one is now posted on Oracle’s Technology Network, with parts 2 and 3 to follow soon.

Thoughts on the iPad

I couldn’t resist sharing my thoughts about the iPad along with every other technology pundit out there. I’m very intrigued by the possibilities of the iPad. From what was announced on stage, there’s nothing that immediately jumps out to say, “Wow, this is going to change the world.” The reason for this, however, is that the ship already sailed with the iPhone/iPod touch. I recently read an article about Apple’s approach to user interface technology, and how the touchscreen display was really the game changer. Why be burdened by a full keyboard if you don’t need it? Instead, allow the interface to be fully customizable to the task at hand. The iPhone/iPod touch did this. The iPad is a recognition that the small form factor of the iPhone is simply not suitable for all applications. If the handheld form factor is class one, then something around the size of a sheet of paper is class two. Anything bigger than that starts to make more sense in a desktop setting, rather than being primarily portable.

Given this angle, I think the iPad positioning makes a lot of sense. Frankly, it’s surprising we haven’t reached this point sooner. Way back in the early nineties, I took a seminar course in graduate school on human-computer interaction, and four or five students had to engage in a panel discussion on where interfaces would go five or ten years from then. I don’t think a single one of them expected the keyboard and mouse to still be the dominant UI technology almost twenty years later, but that is the case. It’s time to recognize that while well suited for some activities, that interface is a boat anchor for others. A platform like the iPad now opens things up to more customized interfaces that may be much more efficient and intuitive for particular tasks. Ironically, I think it’s this same thinking that pushed Apple away from the web-based focus of the original iPhone and into the world of custom apps. While a developer may be able to reach a wider audience with a browser-based application, that comes at the sacrifice of the UI, despite AJAX, Flash, and HTML 5. This is also why I don’t see the lack of Flash support as a big deal. Yes, it prevents us from watching browser-based video, but I’d much rather have a specialized app with a more intuitive interface for doing so. Why be burdened by the web browser if you don’t have to?

So, is the iPad revolutionary? No, I think it’s evolutionary from the iPhone/iPod touch. I’m very interested to see what applications can be developed for this form factor for the educational and medical markets. On a laptop, it’s likely that those advanced applications would have required a desk, because there’s just no way to hold the laptop, with its keyboard, and try to manipulate the track pad, pointer nub (whatever it’s called), or a mouse to achieve the interaction needed. With the iPad, the device is cradled in one arm, leaving your other hand free. You can have advanced interactions. This is where it will show its stuff. Just as the iPhone didn’t have much to show with v1, but the partner apps became much more sophisticated with the 3G and the 3GS, the same will be true with the iPad. Two to three weeks didn’t allow partners to show much at the kickoff, but I think we’ll all look back a year from now and see some revolutionary apps that have been freed from the burden of the keyboard and mouse interface.

Governance Technology and Portfolio Management

David Linthicum continued the conversation around design-time governance in cloud computing over at his InfoWorld blog. In it, he quoted my previous post, even though he chose to continue to use the design-time moniker. At least he quoted the paragraph where I state that I don’t like that term. He went on to state that I was “arguing for the notion of policy design,” which was certainly part of what I had to say, but definitely not the whole message. Finally, Dave made this statement:

The core issue that I have is with the real value of the technology, which just does not seem to be there. The fact is, you don’t need design-time service governance technology to define and define service policies.

Let’s first discuss the policy design comment. Dave is correct that I’m an advocate for policy-based service interactions. A service contract should be a collection of policies, most if not all of which will be focused on run-time interactions and can be enforced by run-time infrastructure. Taking a step back, though, “policy design” is really a misnomer. I don’t think anyone really designs policies; they define them. Furthermore, the bulk of the definition required is probably just tweaking the parameters in a template.
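To make the “definition, not design” point concrete, here is a minimal sketch, in Java, of what a templated policy might look like; the class and parameter names are hypothetical stand-ins for whatever templates your run-time infrastructure actually provides:

    // Hypothetical policy template: "defining" a policy is mostly just
    // supplying parameter values. The enforcement logic belongs to the
    // run-time infrastructure, not to the person defining the policy.
    public class ResponseTimePolicy {
        private final String serviceName;        // contract this policy belongs to
        private final long maxResponseMillis;    // tweakable parameter
        private final int alertThresholdPercent; // tweakable parameter

        public ResponseTimePolicy(String serviceName,
                                  long maxResponseMillis,
                                  int alertThresholdPercent) {
            this.serviceName = serviceName;
            this.maxResponseMillis = maxResponseMillis;
            this.alertThresholdPercent = alertThresholdPercent;
        }

        // Evaluated per request by the run-time enforcement infrastructure.
        public boolean isViolated(long observedMillis) {
            return observedMillis > maxResponseMillis;
        }
    }

In this view, “defining” the policy for a given service is a one-liner, such as new ResponseTimePolicy("CustomerLookup", 500, 90), which is exactly the template-parameter tweaking described above.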

Now, moving to Dave’s second comment, he made it very clear that he was talking about governance technology, not the actual governance processes. Speaking from a technology perspective, I’ll agree that for policy management, which includes policy definition, all of the work is done through the management console of the run-time enforcement infrastructure. There are challenges with separation of concerns, since many tools are designed with a single administration team in mind (e.g., can your security people adjust security policies across services, your operations staff adjust resource consumption, and your development team handle versioning, all without stepping on each other’s toes or doing things they’re not allowed to do?). Despite this, however, the tooling is very adequate for the vast majority (certainly better than 80-90%, in my opinion) of enterprise use cases.

The final comment from me on this subject, however, gets back to my original post. Your SOA governance effort involves more than policy management and run-time interactions. Outside of run-time, the governance effort has its closest ties to portfolio management efforts. How are you making your decisions on what to build and what to buy, whether provided as SaaS or in house? Certainly there is still a play for technology that supports these efforts. The challenge, however, is that the processes that support portfolio management activities vary widely from organization to organization, so beyond a repository with an 80% complete schema for the service domain, there’s a lot of risk in trying to create tools to support it and be successful. How many companies actually practice systemic portfolio management versus “fire-drill” portfolio management, where a “portfolio” is produced on a once-a-year (or some other interval) basis in response to some event, ignored for the rest of the time, and then rebuilt when the next drill occurs? Until these processes are more systemic, governance tools are going to continue to be add-ons to other, more mature suites. SOA technologies tried to tie things to the run-time world. EA tools, on the other hand, are certainly moving beyond EA and into the world of “ERP for IT,” for lack of a better term. These tools won’t take over all corporate IT departments in the next 5 years, but I do think we’ll see increased utilization as IT continues its trend toward being a strategic advisor and manager of IT assets, and away from being the “sole provider.”

Governance Needs for Cloud Services

David Linthicum started a debate when he posted a blog with the attention-grabbing headline of “Cloud computing will kill these 3 technologies.” One of the technologies listed was “design-time service governance.” This led to a response from K. Scott Morrison, CTO and Chief Architect at Layer 7, as well as a forum debate over at eBizQ. I added my own comments both to Scott’s post, as well as to the eBizQ forum, and thought I’d post my thoughts here.

First, there’s no doubt that the run-time governance space is important to cloud computing. Clearly, a service provider needs to have some form of gateway (logical or physical) that requests are channeled through to provide centralized capabilities like security, billing, metering, traffic shaping, etc. I’d also advocate that it makes sense for a service consumer to have an outgoing gateway as well. If you are leveraging multiple external service providers, centralizing functions such as digital signatures, identity management, transformations, etc. makes a lot of sense. On top of that, there is no standard way of metering and billing usage yet, so having your own gateway where you can record your own view of service utilization and make sure that it’s in line with what the provider is seeing is a good thing.
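To illustrate the metering point, here is a minimal sketch, in Java, of a consumer-side outgoing gateway that records its own view of utilization; all names are hypothetical, and a real gateway would be a product rather than a hand-rolled class:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Consumer-side outgoing gateway sketch. Every external call is funneled
    // through invoke(), so the consumer keeps an independent record of
    // utilization to reconcile against each provider's bill.
    public class OutboundGateway {
        private final ConcurrentHashMap<String, AtomicLong> callsByProvider =
                new ConcurrentHashMap<>();

        public String invoke(String providerId, String requestPayload) {
            // Record our own view of service utilization before forwarding.
            callsByProvider.computeIfAbsent(providerId, k -> new AtomicLong())
                           .incrementAndGet();
            // Centralized concerns (digital signatures, identity management,
            // transformations) would also be applied here.
            return forward(providerId, requestPayload);
        }

        public long recordedCalls(String providerId) {
            AtomicLong count = callsByProvider.get(providerId);
            return count == null ? 0 : count.get();
        }

        private String forward(String providerId, String requestPayload) {
            // Placeholder: the actual transport (HTTP, JMS, etc.) is out of
            // scope for this sketch.
            return "response-from-" + providerId;
        }
    }

At billing time, recordedCalls("some-provider") gives you a number to compare against the invoice, which is precisely the reconciliation argument made above.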

The real problem with Dave’s statement is the notion that design-time governance is only concerned with service design and development. That’s simply not true. In my book, I deliberately avoided this term, and instead opted for three timeframes of governance: pre-project, project, and run-time. There’s a lot more that goes on before run-time than design, and these activities still need to be governed. It is true that if you’re leveraging an external provider, you don’t have any need to govern the development practices. You do, however, still need to govern:

  • The processes that led to the decision of what provider to use.
  • The processes that define the service contract between you and the provider, both the functional interface and the non-functional aspects.
  • The processes executed when additional consumers at your organization begin using externally provided services.

For example, how is the company deciding what service provider to use? How is the company making sure decisions by multiple groups for similar capabilities are in line with company principles? How is the company making sure that interoperability and security needs are properly addressed, rather than being left at the whim of what the provider dictates? What happens when a second consumer starts using the service, yet the bills are being sent to the first consumer? Does the provider’s service model align with the company’s desired service model? Does the provider’s functional interface create undue transformation and integration work for the company? These are all governance issues that do not go away when you switch to IaaS, SaaS, or PaaS. You will need to ensure that your teams are aware of the contracts in place and don’t start sending service requests without being properly onboarded into the contractual relationship, and that your internal allocation of charges takes multiple consumers into account, if necessary. All of this must happen before the first requests are sent in production, so the notion that run-time governance is the only governance concern in a cloud computing scenario is simply not true.

A final point, added after some conversation with Lori MacVittie of F5 on Twitter: let’s not forget that someone still needs to build and provide these services. If you’re a service provider, you clearly still have technical, design-time governance needs in addition to everything else discussed earlier.

IT Needs To Be More Advisory

Sometimes a simple message can be the most powerful. In a recent discussion on a SOA Consortium phone call, I made the comment that IT needs to shift its mentality from provider to advisor. I hope that most people read this and view it as completely obvious, but recognizing the fact and actually executing against it are two different stories.

Let’s look at two trends that I think are pretty hard to argue against in most corporate IT organizations. The first trend is to build less stuff. Building less means that we’re either reusing stuff we already have, or acquiring stuff from some other source and configuring (not customizing) it to meet our needs. This could be free stuff, COTS, SaaS, etc. The point is, there is no software development required.

The second trend can be summed up as buy fewer (or no) servers. Virtualization, data center consolidation/elimination, and cloud computing all have ties back to this trend, but the end result is that no one wants to build new data centers unless data centers are your business.

So, if you accept these trends as real, this means that IT isn’t providing applications and IT isn’t providing infrastructure. Then what is IT providing? I would argue that rather than looking for something new to “provide,” IT needs to change its fundamental thinking from provider to advisor or be at risk of becoming irrelevant.

Am I simply stating the obvious? Well, to some extent, I hope so. What should also be obvious, however, is that this change in role at an organizational level won’t happen by accident. While an individual may be able to do this, redefining an entire organization around the concept is a different story. A provider typically only needs to understand their side of the equation, that is, what they’re providing. If you’re a software developer, you understand software development, and you sit back and wait for someone to give you a software development task. Often, the provider may establish some set of offerings, and it’s up to the consumer to decide whether those offerings meet their needs or not. An advisor, on the other hand, must understand both sides of the problem: the needs of the consumer and the offerings of the provider.

To illustrate this, take an example from the world of financial services. A broker may simply be someone you call up and say, “Buy 100 shares of AAPL at no more than $200.” They are a provider of stock transaction services. A financial advisor, on the other hand, should be asking about what your needs are and matching those against the various financial offerings they have at their disposal. If they don’t understand client needs, or if they don’t understand the financial offerings, you’re at risk of getting something sub-optimal.

IT needs to shift from being the technology provider to being the technology advisor. Will people outside of IT continue to hand-pick technology solutions without the right breadth of knowledge? Sure, just as people today go out and buy some stock without proper thought as to whether it’s really the right investment for them or just what everyone else is doing. The value that IT needs to add is the same one financial advisors offer: a significantly better depth of understanding of the technology domains, along with the right amount of understanding of the business domains of our companies, applied to advising the organization on technology decisions. For the non-IT worker who loves technology, this should be seen as validation of their efforts (not a roadblock).

The final message in all of this, however, is that I believe architecture plays a critical role. To actually build an advisory organization, you must categorize both the technology and the business into manageable domains where people can build expertise. Where does this categorization come from? Architecture. Taking a project-first approach is backwards. Projects should not define the categories and the architecture; the categories and desired architecture should define the projects.

So, you can see that this simple concept really does represent a fundamental shift in the way of thinking that needs to occur, and it’s one that’s not going to happen overnight. If you’re a CIO, it’s time to get this message out and start defining the steps you need to take to move your organization from a provider to an advisor.

Tibbr and Information in the Enterprise

Back in March of this year, I asked “Is Twitter the cloud bus?” While we haven’t quite gone there yet, TIBCO has run with the idea of Twitter as an enterprise messaging bus and announced Tibbr. This is a positive step toward the enterprise figuring out how to leverage social computing technologies. While I think TIBCO is on the right track with this, my pragmatist nature also sees that there’s a long way to go before these technologies achieve mainstream adoption.

The biggest challenge is creating the robust information pool. Today, the biggest complaint of newcomers to Twitter is finding information of value. It’s like walking into the largest social gathering you’ve ever seen and not knowing anyone. You can walk around and see if you overhear someone discussing something interesting, but that can be a daunting task. Luckily, however, there are millions of topics being discussed at any given time, so with the help of a search engine or some trusted parties, you can easily begin to build the network. In the enterprise, it’s not quite so easy.

When looking at information sharing, here are two key questions to consider:

  • Are new people receiving information that they would not otherwise have seen, and are they contributing back to the conversation?
  • Are the conversation groups the same, but the information content improved in its relevance, timeliness, quality, preservation, or robustness?

If the answer to both of those is “no”, then all you’ve done is create a new channel for an existing audience containing information that was already being shared between them. Your goals must be to enable one or more of the following:

  • Getting existing information to/from new parties
  • Getting new information to/from existing parties
  • Delivering more robust/higher quality information to/from existing parties
  • Delivering information in a more timely/appropriate manner to existing parties
  • Making information more accessible (e.g. search friendly)

The challenge in achieving these goals via social networking tools begins with information sources. If you are an organization with 10,000 employees, only a small percentage of those employees will be early adopters. Strictly for illustrative purposes, let’s use IT department size as a reasonable guess at the number of early adopters. In reality, a lot of people in IT will jump on it, as will a smaller percentage of employees outside of IT. A large IT department for a 10,000 person company would be 5%, so we’re looking at 500 people participating on the social network. Can you see the challenge? Are these 500 people merely going to extend the conversations they’re having with the same old people, or is the content going to meet the above goals?

Now along comes Tibbr, but does the inclusion of applications as sources of information improve anything? If anything, the way we’ve approached application architecture is even worse than dealing with people! Applications typically only share information when explicitly required to by some other application. How many applications in your enterprise routinely post general events, without concern for who may be listening for them? Putting Tibbr or any other message bus in place is only going to be as valuable as the information that’s placed on the bus, and most applications have been designed to keep information within their boundaries unless required to share it.

So, to be successful, there’s really one key thing that has to happen:

Both people and applications must move from a “share when asked” mentality to a “share by default” mentality.

When I began this blog, I wasn’t directing the conversation at anyone in particular. Rather, I made the information available to anyone who may find it valuable. That’s the mentality we need in our organizations and the architecture we need in our applications. Events and messages can direct people to the appropriate gateway, with either direct access to the information, or instructions on how to obtain it if security is a concern. Today, all of that information is obscured at a minimum, and more likely locked down. Change your thinking and your architecture, and the stage is set for getting value from tools like Tibbr.
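In code terms, the shift is small but fundamental. Here is a minimal sketch, assuming a hypothetical EventBus interface, of an application that shares by default, publishing its business events without knowing who, if anyone, is listening:

    // Hypothetical bus interface; Tibbr, JMS topics, or any pub/sub
    // infrastructure could play this role.
    public interface EventBus {
        void publish(String topic, String event);
    }

    public class OrderService {
        private final EventBus bus;

        public OrderService(EventBus bus) {
            this.bus = bus;
        }

        public void placeOrder(String orderId, String customerId) {
            // ... core business logic for placing the order ...

            // Share by default: published unconditionally, not directed at
            // any particular consumer. Listeners may or may not exist yet.
            bus.publish("orders.placed",
                    "{\"orderId\":\"" + orderId + "\"," +
                    "\"customerId\":\"" + customerId + "\"}");
        }
    }

The design choice is that the publisher takes on no knowledge of its audience; the audience discovers the information, just as readers discover a blog.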

Book Review: Cloud Computing and SOA Convergence in Your Enterprise by David Linthicum

Full disclosure: I was provided a review copy of this book by the publisher free of charge. From the back cover: “Cloud Computing and SOA Convergence in Your Enterprise offers a clear-eyed assessment of the challenges associated with this new world–and offers a step-by-step program for getting there with maximum return on investment and minimum risk.”

My review in a nutshell: This is a very well-written, easy-to-read book, targeted at IT managers, that provides a robust overview of Cloud Computing and its relationship to SOA, and the core basics of a game plan for leveraging it.

This book was an extremely easy read, which is to be expected of any book from Dave, given the easy-to-read style of his InfoWorld blog. He provides a taxonomy of cloud offerings, extending the typical three categories (Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service) to eleven. While some may think eleven is too many, the fact remains that a taxonomy is a necessary starting point, and Dave provides solid definitions for each category that an organization can choose to use. He goes on to provide a financial model to consider for making your cloud decisions, but correctly states that cost is only one factor in the decision-making process, and he covers the other dimensions that should be part of your decision making in equal detail. In chapters 5 through 10, he walks through the steps associated with moving services, data, processes, governance, and testing into a cloud environment. Dave’s steps in these chapters are very straightforward. That being said, Dave does not sugarcoat the fact that these steps are not always easy to execute, and your success (or lack of it) is highly dependent on how large a domain you choose to attack.

For someone who has researched SOA and cloud computing in detail, this book may not provide a lot of new information, but what it does provide is a straightforward process for organizing your effort and making progress. Often, that can be the biggest challenge. For this reason, I do think the book is geared more toward the management side of IT and less toward the technical side (architects and developers), but as an architect, I did find the taxonomies presented valuable. The only area for improvement I saw would have been a stronger emphasis on the role the service model must play in the selection process, along with a stronger emphasis on having service managers inside your IT organization. Dave discussed both of these topics; however, to make stronger ties between SOA and cloud computing (or even ITIL and cloud computing), these points could have been emphasized more strongly. Choosing the right cloud provider requires that you have solid requirements on what you need, which come from your service model. Ensuring that your requirements continue to be met, and don’t get transformed into what the service provider would prefer to offer, requires solid service management on your side.

Any cloud computing initiative will require that everyone involved have a base level of understanding of the goals to be achieved and the process for doing it. This book can help your staff gain that base understanding.

Architecture Governance

Mike Walker has had a series of good posts recently on the subject of architecture review boards, but this one in particular, which focuses on governance, caught my attention. In my SOA governance book, I made the obligatory analogies to municipal/federal government in the first chapter, but I didn’t go so far as to compare it directly to the three branches of government here in the United States. I’ve thought about doing this, but never quite put it all together. Thankfully, Mike did. In his post, the following parallels are drawn:

  • An architecture review board (ARB) is analogous to the Judicial arm of the US Government. It reviews individual projects and has the power to decline progress of those projects. Mike also adds that it performs enterprise-wide decision making, but more on that later.
  • An architecture guidance team (AGT) is analogous to the Legislative arm of the US Government. It sets principles and policies, creates standards, and oversees the technology lifecycle.
  • Architecture Engagement Services (which typically includes the EA team) is analogous to the Executive arm of the US Government. It defines strategy, designs the enterprise architectures, and performs IT portfolio management.

I had some good conversations with my colleagues about this post and wanted to raise some of the topics here. First, let’s come back to the role of the ARB/judicial branch in decision making. The ARB doesn’t make architecture decisions; it verifies whether or not the decisions made by the project team follow the law. The only decision the ARB should make is whether the project can proceed forward or not. Mike goes on to state that there should be a clear separation of duties, with senior IT decision makers in the ARB and the AGT filled with SMEs. In my experience, this is normally not the case. I’ve typically seen approaches where the membership of these two groups overlaps significantly. I think this is directly attributable, however, to the effectiveness of the AGT.

One of my pet peeves is when the expectations of a review are not clear. Expectations are unclear when the policies and standards either don’t exist, aren’t widely communicated, or aren’t sufficient. When we get into these grey areas, a group solely focused on enforcement of standards will struggle. In other words, if there are no laws that apply, one of two things will happen. First, you’ll get an “activist judge” who will interpret the law according to their own opinions, effectively setting policy rather than enforcing the law as written. Second, you’ll go by the letter of the law and deem the space to be uncovered, and as a result, the project can do whatever it wants. Most organizations don’t want the latter, so to hedge their bets, they put the policy makers in as judges so they can provide new policy on the fly. That doesn’t bode well, in my opinion, because it creates an environment where project reviews are likely to be painful, as the issues called out will be based on the opinions of the review board, which are undocumented and cannot be anticipated, rather than on the standards of the company, which are documented and can be anticipated by a project team.

The second area I wanted to call out was the separation of the legislative effort from the executive effort. I really liked this separation, and just as an ineffective AGT impacts the ability of the ARB to do its job, an ineffective executive branch will make the AGT ineffective. Mike states that the AGT should consist of technology SMEs, and I agree with that, to a point. The inherent risk is that technology experts (and I’ve been in that role and have probably been guilty of this at times) can get caught up in advancing their technology rather than executing the strategy. If the AGT isn’t first focused on creating policies and standards that realize the strategy, it will be at risk of hitting gridlock in areas where multiple solutions exist. Take the current health care debate. The strategy has been made clear by the executive branch. If the legislative branch focuses too much on individual party ideology rather than on the strategy of establishing universal health care, gridlock will ensue (except that in this case, one party has a super-majority). The same holds true in the enterprise. Technology advocates can wind up in endless debate over their preferred platforms and completely lose sight of the strategy. At the same time, if the strategy is vague, then there’s no way the legislative branch can do its job. The AGT could set out to establish enterprise standards, but if the executive team isn’t clear on where enterprise standards should exist and where they should not, the wrong areas can be targeted, making adherence to those standards a challenge.

In short, I really like the three-branch model of governance proposed by Mike. It’s a triangle, the strongest structure. It’s strongest when each leg is the same length, working in balance. Make one of those legs smaller, and the other two must lengthen to pick up the slack. If your governance efforts are effective in all three areas, you will have a very strong architecture. If your efforts are ineffective in just one of these areas, you may have your work cut out for you.

Facebook for the Enterprise

In this post on IT Business Edge, Dennis Byron discussed Facebook as an enterprise software company. His thoughts were based on a keynote address from Chris Hughes, co-founder of Facebook, given at the 2009 annual National Association for Multi-Ethnicity in Communications conference. Dennis stated that Chris indicated Facebook is much more business-friendly than the perception may have been just two or three years ago. After reading it, my impression was that the position being advocated was the use of Facebook, as is, for corporate purposes. That already occurs today, but primarily as another B2C channel used by marketing types. There are some pioneers out there doing more, but of the ones I’ve seen, it’s all about the customer/potential customer community.

In my opinion, viewing Facebook solely as a marketing/customer support channel is seriously limiting its use to an enterprise. The conversation should begin with an analysis of the communities that can be supported. Guess what? There’s a looming community with a very complicated social structure that exists within the walls of the company. Why can’t tools that are designed for enhancing communication and interaction between the social structures of society be applied within the walls of the enterprise?

Personally, I see this as having potential for revolutionary change, rather than evolutionary change, in the way intra-company communication goes on, and there’s a simple analogy to the world of SOA. Today, just as before the days of email, groupware, collaboration portals, etc., the primary model is still directed conversation. You’re either in the loop, or you’re not. To compare this to SOA, it’s a world where our focus is still on application A calling application B. What’s missing is support for undirected conversation. Combine SOA with EDA, and you have a much more powerful environment. The actions taken by application A and application B are events, and those events may be valuable to other applications. If those other applications have no visibility into those events, that value is left on the table. The same goes for our social interactions. If that information is kept private, there is value left on the table. You may counter, “It’s all out there on my SharePoint site,” but that doesn’t constitute an event-based system. While the information is there, it’s not conducive to searching, filtering, reading, etc. This is where an environment like Facebook has the potential to add much more than the email/IM/portal-based environment that is the norm today. The key is the news feed. It consists of messages from people (friends), but also from applications, coupled with the more traditional collaboration tools of messaging, chat, file sharing, etc. While the Facebook news feed may not be quite as flexible as Twitter feeds, it’s clear that it’s headed in that direction. The key to success, however, is getting those events published. If applications don’t publish events, it’s hard to achieve the full potential of SOA and BPM. Add employees to that sentence, and it’s hard to achieve the full potential of the organization.
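As a rough illustration of applications appearing in the news feed alongside people, here is a minimal sketch of an adapter that turns published application events into feed entries; the names are hypothetical, and no actual Facebook or Tibbr API is being modeled:

    import java.util.ArrayList;
    import java.util.List;

    // Adapter sketch: application events become news-feed items, so the feed
    // mixes updates from applications with updates from people.
    public class AppFeedAdapter {
        private final List<String> feed = new ArrayList<>();

        // Called whenever an application publishes an event to the bus.
        public void onEvent(String application, String topic, String summary) {
            feed.add(application + " posted to " + topic + ": " + summary);
        }

        // The "news feed" view: the most recent n entries.
        public List<String> latest(int n) {
            int from = Math.max(0, feed.size() - n);
            return new ArrayList<>(feed.subList(from, feed.size()));
        }
    }

Searching, filtering, and subscribing then operate on the feed itself, which is what makes this more event-friendly than a pile of documents on a portal.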

Unfortunately, asking a corporate enterprise to simply start using the public Facebook for these purposes is asking for too large a leap. While I do think we will get to the point where the technology must allow corporate communications to extend to parties outside the company, today it’s still largely a private conversation. Requiring companies to fit their needs into the current consumer-driven, public environments is a big leap for old, established companies. The right first step is an environment that packages up all the features of Facebook that are appropriate for a corporate environment and makes them available initially as its own private world, but with a clear path to integration with the broader, public Facebook. This doesn’t mean companies would be installing it in their own data centers, although that could be an option; it just means that it’s a walled garden for each company. It’s like the difference between Yammer and Twitter.

I’m looking forward to seeing more stories of companies leveraging social networking platforms inside their walls and then taking the next step of extending that to external communities as appropriate. I hope that we’ll see some case studies out of some large, established enterprises and not see adoption limited to the world of the new startups that begin with a culture built around these tools.

BPM and SOA Tool Linkage

I’ve been invited to participate in SearchSOA.com’s “Ask the Expert” series and will be fielding questions primarily on BPM technologies in the context of SOA, but I hope to see some EA related questions as well. My first response was posted on November 3rd, answering the question, “What is a key characteristic I should look for in BPM modeling tools, especially when looking to pair them with SOA?” You can read my response at SearchSOA.com.

Oracle OpenWorld: Larry Ellison Keynote

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

Larry is now on stage, starting out talking about Linux. Announcing Exadata 2, built in collaboration with Sun. He will then introduce the new product support system that will discover problems in our systems before we do.

Linux: Uptake of Oracle Enterprise Linux has been better than anticipated. It was designed to be compatible with Red Hat. Oracle then developed its own VM, which is also open source and used by lots and lots of customers. Oracle’s belief is that eventually the VM and the operating system must work exceptionally well together, and must be engineered together. It is extremely useful if the VM has the same management tools as the operating system; it lowers costs and dramatically improves overall reliability of the system. A survey done by HP asked customers that run the Oracle database on Linux which distribution they use: 65% said Oracle Enterprise Linux, 37% said Red Hat, and 15% said SuSE.

Now on to the Exadata 2 database machine. Exadata 1 was Oracle’s first hardware product, specialized for data warehousing. According to Oracle’s tests, it was 10-50x faster than conventional machines running the Oracle database. Larry presented customer quotes to back this up and anecdotally said that Apple (specifically his friend Steve Jobs) had similar results. Exadata 2 is targeted at OLTP and runs twice as fast as Exadata 1. He stated that it is the very first database machine that can do high-performance transaction processing. Exadata 2 does random I/O very rapidly, making use of a huge semiconductor memory hierarchy: in a single box, it uses 400GB of DRAM and 5TB of flash cache memory. “Oh, and by the way, it’s completely fault tolerant.” He then said they leverage a custom compression algorithm to store large databases (e.g. 15TB) completely in semiconductor memory, and that there’s another compression algorithm for queries that could allow a 50TB database to be stored completely in the 5TB of flash cache memory. He then discussed how the Exadata systems perform better than some of the in-memory database systems that are out there. He also mentioned that the system comes configured out of the box. Wrapping up the Exadata 2 discussion, Larry said it is the fastest computer ever built for data warehousing applications and the only database computer for OLTP, delivering record-breaking performance at an attractive cost.

The keynote was interrupted while Arnold Schwarzenegger came on stage, delivered some great lines about the role of technology in his previous job as an action movie actor, and then spent a lot of time touting California’s technology. One good quote from him: “I say today, fear not … The best and brightest are working to solve the challenges of the 21st century.”

Now back to Larry and the new product support system. The slide states one unified support system, unifying My Oracle Support and Enterprise Manager. Larry said they will collect our configurations, hardware and software, and upload them from Enterprise Manager into a global configuration database in My Oracle Support. Those databases will allow them to do proactive problem detection and recommend patches before we discover bugs in their software or other vendors’ software. Richard Sarwal came on stage to discuss and demo this approach, as well as other advances in management technology built into Oracle Enterprise Manager. As someone who is passionate about effective operational management, it was nice to see the continued emphasis on the Oracle Enterprise Manager platform.

Larry then moved on to Fusion applications. He emphasized the role of SOA, BPEL, etc. in the construction of the applications. Version 1 scope includes financial management, human capital management, sales and marketing, supply chain management, project portfolio management, procurement management, and governance, risk, and compliance. He emphasized that it is the first suite of business applications built on standards-based middleware. The use of modular services (SOA) allows for unprecedented configurability by business users. A quote from Larry: “You assemble the components in the order that you want to use them.” He stated that it has a business-intelligence-driven user interface, leveraging business status (notifications), tasks and worklists, etc. Another quote from Larry: “We tell you what you need to know, what you need to do, and how to do it. If you can’t do it yourself, we tell you who in the organization … you need to collaborate with to get your job done.” At that point, the keynote switched over to a demo of the Fusion apps with Steve Miranda and Chris Leone. The user interface of the HR system in particular really stood out to me as a very usable system.

With that, Larry thanked all of us and the keynote was closed.

Oracle OpenWorld: Seven Game Changing Trends: How Prepared Are You?

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

The keynote speaker is Kris Gopalakrishnan, co-founder and CEO of Infosys, talking about the future of IT and innovation. He is stating that employee focus and IT focus are necessary for innovation, and must be funded by efficiencies in operational processes. The 7 game-changing trends of IT-led innovation are:

  1. Simplification of complex business systems. Simplifying organizational complexity reduces business risks, frees up cash.
  2. Architecting an adaptive organization: Integrating processes, systems and metrics.
  3. Moving from value chains to value webs. The concept of value existing in a discrete, linear fashion, known as a value chain, worked previously. In an economy of ideas, thoughts and information need to flow back and forth, up and down, sideways, etc., creating a value web. He stated that innovation can come through co-creation with customers.
  4. Smarter Organization: Better learning through collaboration and personalization. It ensures faster, better, cheaper adoption and utilization of systems.
  5. IT-led innovation in healthcare: universal electronic healthcare records, creating personalized healthcare and medicines, software-intensive medical device networks.
  6. IT-led innovation for better banking: banking the unbanked (e.g. rural areas of India). He’s mentioning branchless, internet-based banks, including some in India run by Infosys. The next item is digital cash.
  7. Strategic partnering as your innovation weapon. It allows innovation to have a variable cost model, scaling with demand.

Oracle OpenWorld: SOA-Enabled BPM Adoption, Reference Architecture and Methodology Aspects

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

The speakers for this session were Manas Deb, Sr. Director, SOA/BPM/Governance Product Management from Oracle and Mark Wilkins, Enterprise Architect, EAP, from Oracle.

They framed the session as a discussion of best practices from their EA practice, focused on companies whose goal is to adopt BPM built on a foundation of services. They began with a quick recap of what they feel SOA and BPM are, with SOA focused on encapsulation and loose coupling, and BPM focused on improved efficiency. I’m not going to debate those definitions here, just repeating them to set the context of the presentation.

They, like many others, extended the three-tier model to insert processes between the presentation tier and the service tier. The one thing they did differently was to not claim the process layer is a new tier; rather, they presented it as an extension of the services tier. Obviously, the one risk with this is that it immediately puts BPM into a technology context, rather than a business context. That isn’t a problem, as long as it isn’t the sole framing of your BPM and SOA conversations. It may be the cornerstone for conversations with IT developers, engineers, and solution architects, but certainly not for analysts, business architects, and other non-IT staff.

The first slide on methodology emphasized what they call scopes. The examples shown included enterprise (cross-project) scope, project scope, and operations scope. At the enterprise scope, the interest is assessment, strategy, and planning: performing value-benefit analysis, forming CoEs, establishing roadmaps and maturity models, planning the portfolio, establishing governance, etc. The project scope is execution and delivery focused, while the operations scope is focused on measurements, scorecards, and keeping things running. It’s important to keep these scopes, or viewpoints, in mind and ensure that they all work together.

Mark went on to blow through a whole bunch of slides way too quickly. This should have been a 60 minute presentation. It appeared that there was some good content in there, including Oracle’s approach to the use of BPM and SOA conceptual reference architectures and how they eventually drive down to the physical view of the underlying infrastructure. He went on to show examples of the conceptual architectures for BPM and SOA, some information on a maturity model, a governance framework, and a few slides that tried to fit it all together. Once I’m able to download the slides, I’ll try to remember to come back and edit this post with the details. It’s unfortunate that a presentation that appeared to have very good content with appeal to architects got crammed into half the time frame of the other sessions.

Oracle OpenWorld: The Big BPEL-ESB-OSB cook-off

Full disclosure: I am attending Oracle OpenWorld courtesy of Oracle.

The speaker in this session is Andreas Chatziantoniou from Accenture. He’s discussing the overlap between Oracle’s BPEL, ESB (the legacy Oracle product), and OSB (the former BEA service bus) products.

First up is BPEL. His slide states that BPEL should be used for system-to-system or service orchestration, when human workflow is needed, and when there are parallel request-response patterns. The next slide says that BPEL should not be used for complex data transformations, should not be used to program, and should not be used as a business modeling tool. At first glance, this may seem strange, but I think it’s more an indication that BPEL is something that gets generated by your tool, not something people should be editing directly. This point could have been made more clearly. He also emphasized that you should not use BPELJ (embedded Java in BPEL).
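For illustration only, the pattern his slide endorses, parallel request-response interactions combined by an orchestration, looks roughly like the following if rendered in plain Java. In practice you would draw this in the graphical modeler and let the tool generate the BPEL (a flow activity); the Java here just exposes the shape of the pattern and is not a suggestion to use BPELJ:

    import java.util.concurrent.CompletableFuture;

    // Orchestration sketch: invoke two services in parallel and combine
    // the responses, the kind of flow a BPEL engine is built to run.
    public class QuoteOrchestration {
        public String getQuotes(String item) {
            CompletableFuture<String> supplierA =
                    CompletableFuture.supplyAsync(() -> callSupplierA(item));
            CompletableFuture<String> supplierB =
                    CompletableFuture.supplyAsync(() -> callSupplierB(item));
            // Join both parallel request-response interactions.
            return supplierA.join() + " | " + supplierB.join();
        }

        // Placeholders for real service invocations (hypothetical endpoints).
        private String callSupplierA(String item) { return "A quotes " + item; }
        private String callSupplierB(String item) { return "B quotes " + item; }
    }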

He’s now talking about “dehydration,” a term I had not heard before.  He’s using that to refer to the writing of a process state to disk so it can be restored at a later time.  He stated that this is a natural part of BPEL, but not part of ESB/OSB.  I can live with that.  A service bus shouldn’t be doing dehydration any more than a network switch should be.

Now on to ESB/OSB. His slide says they should be used for loose coupling, location transparency, mediation, error handling, transformation, load balancing, security, and monitoring. It’s a good list, although it does contain the two grey areas of mediation and transformation; you need to further define what types of mediation and transformation should and should not be done. The way I’ve phrased it is that ESBs should be about standards-in and standards-out. As long as you’re mediating and transforming between standards (and the same standards on both sides), it’s a good fit. If you are transforming between external and internal standards, as is the case in an external gateway, consider whether your ESB is the right fit, since these mappings can get quite complicated. Those are my words, not the speaker’s; sorry, this is something I’ve thought a lot about.
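One way to picture the standards-in/standards-out rule is as a simple check the organization applies when deciding where a transformation lives; the format names below are hypothetical:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // Sketch of the standards-in/standards-out rule: the service bus only
    // transforms between formats that are both declared enterprise standards.
    // Anything else is new business logic for a development platform.
    public class MediationPolicy {
        private final Set<String> standardFormats = new HashSet<>(
                Arrays.asList("CustomerRecord-v2", "CustomerRecord-v3"));

        public boolean allowedInServiceBus(String fromFormat, String toFormat) {
            return standardFormats.contains(fromFormat)
                    && standardFormats.contains(toFormat);
        }
    }

Under this rule, mapping CustomerRecord-v2 to CustomerRecord-v3 belongs in the bus, while mapping a partner’s proprietary format to an internal standard is flagged as gateway or development work.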

He’s now talking about mediation, specifically referring to a component that existed in Oracle’s legacy ESB. He said it connects components in a composite application. To me, this does not belong in a service bus, and in the case of Oracle Service Bus, it does not. He did not go into more detail on the type of mediation (e.g. security token mediation, message schema mediation, transport mediation). As previously said, this needs to be narrowed down to make an appropriate decision on whether your mediation is really new business logic that belongs on a development platform, or mediation between supported standards that can be done by your connectivity infrastructure.

On transformation, Andreas focused more on what the platforms can do than on what they should do, calling out that XML transformations via XQuery, XSLT, etc. can be done equally well on any of the platforms. His advice was to do it in the service bus and avoid mixed scenarios. I’m really surprised at that, given how CPU-intensive transformations and mappings can be. His point was that in a very large (50-60 step) BPEL process, handling transformations could get ugly. I see the logic in this, but I think if you do the analysis on where those transformations are needed, it may only be in one activity, and best handled by the platform for that activity itself.

Overall, the speaker spent too much time discussing what the products can do, calling out overlaps, and not enough time on what they should do. There was some good advice for customers, but I think it could have been made much simpler. My take on this whole debate has always been straightforward. A BPEL engine is a service development platform. You use it to build new services that are most likely some composite of existing services; I like to think of it as an orchestrated service platform. As I previously said, though, you don’t write BPEL. You use the graphical modeler in your tool, and behind the scenes, it may (or may not) be creating BPEL.

A service bus is a service intermediary. You don’t use it to build services; you use it to connect service consumers and service providers. Unfortunately, in trying to market the service bus, most vendors succumbed to feature creep, whether by creating their ESB from a legacy EAI product or by adding more development-like features to drive sales. Think of it as a very intelligent router, meant to be configured by Operations, not coded by developers.

Oracle OpenWorld: Five Steps to Better SOA Governance with Oracle Enterprise Manager

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

I’m having to recreate this post thanks to a bug in WordPress for the iPhone, which managed to eat a couple of posts, so my apologies for it being a bit shorter than hoped, since I had to recall what I had typed live.

In this session, James Kao from Oracle presented five steps to improving SOA governance. The core premise emphasized throughout is that the use of metadata is becoming more and more prevalent in the development world, as it is necessary to increase the efficiency of our development efforts. Examples include SCA descriptors and BPEL. We will have a big problem, however, if the operational tools can’t keep up with these advances: this same metadata needs to be leveraged in the run-time world to improve our operational processes. I’ll add that while much of the metadata is coming out of the SOA and BPM technology space, this concept should not be limited to just those areas. The concept of having metadata that describes solutions, for gains in both the design-time world and the run-time world, is extremely important.

The five steps presented were:

  1. Assess. (sorry lost the details on this one)
  2. Discover. This is where the metadata created at design time is leveraged to set up appropriate run-time governance (see the sketch after this list).
  3. Monitor. The systems must be instrumented appropriately, exposing metrics, in addition to leveraging external monitors to collect information about run-time behavior.
  4. Control. The four examples given here were policy management, service management, server/service provisioning, and change management. Clearly, this is the actionable step of the process. Based upon the data, we take action. Sometimes that action is reflected in changes to the infrastructure via provisioning and/or change management, sometimes that action is modifications to the policies that govern the systems.
  5. Share. Finally, just as the metadata from design time played a role in the run-time world, the metrics collected at run time can play a role in other processes. The information must be shared with systems like Oracle BAM or Oracle Enterprise Repository to provide a feedback loop so that appropriate decisions can be made for future solutions.
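Here is the sketch referenced in step 2, showing the general idea of driving run-time governance from design-time metadata. The metadata shape and registry interface are hypothetical stand-ins, not Oracle Enterprise Manager APIs:

    import java.util.List;
    import java.util.Map;

    // "Discover" sketch: read the services declared in design-time metadata
    // (e.g. an SCA composite descriptor) and register run-time monitors for
    // each, instead of configuring monitoring by hand after deployment.
    public class DiscoveryStep {
        public void register(Map<String, List<String>> servicesByComposite,
                             MonitorRegistry registry) {
            for (Map.Entry<String, List<String>> composite :
                    servicesByComposite.entrySet()) {
                for (String service : composite.getValue()) {
                    registry.addMonitor(composite.getKey(), service);
                }
            }
        }
    }

    // Hypothetical registry; a real platform would provide its own.
    interface MonitorRegistry {
        void addMonitor(String composite, String service);
    }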

I was very impressed with James’ grasp of the space. While this session presented concepts and not a live demonstration, if Oracle Enterprise Manager can make these concepts a reality in a usable manner, this could be a very powerful platform for companies leveraging the red stack. Excellent talk.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of this material in articles, whitepapers, blogs, etc. must be attributed to me alone, without any reference to my employer. Use of my employer’s name is NOT authorized.