Thursday, February 11, 2010

Smart Grid for data centers better manages electricity to slash IT energy spending, frees up wasted capacity

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Nowadays, CIOs need to both cut costs and increase performance. Energy has never been more important in working toward this productivity advantage.

It's now time for IT leaders to gain control over energy use -- and misuse -- in enterprise data centers. More often than not, very little energy capacity analysis and planning is done on data centers that are five years old or older. Even newer data centers don't always gather and analyze the energy data being created by all of their components.

Finally, smarter, more comprehensive energy planning tools and processes are being directed at this problem. It requires a lifecycle approach to move data centers toward fuller automation.

And so automation software for capacity planning and monitoring has been newly designed and improved to best match long-term energy needs and resources in ways that cut total costs, while reclaiming available capacity from old and new data centers.

Such data gathering, analysis, and planning can break the inefficiency cycle that plagues many data centers, where hotspots are mismatched with cooling capacity and underused, unneeded servers burn energy needlessly. These so-called Smart Grid solutions jointly cut data center energy costs, reduce carbon emissions, and can dramatically free up capacity from overburdened or inefficient infrastructure.

By gaining far more control over energy use and misuse, solutions such as Hewlett Packard's (HP) Smart Grid for Data Center can increase capacity from existing facilities by 30-50 percent.

This podcast features two executives from HP to delve more deeply into the notion of Smart Grid for Data Center. Now join Doug Oathout, Vice President of Green IT Energy Servers and Storage at HP, and John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: Data center transformation (DCT) is focused on three core concepts, and energy is a key enabler for all of them to work. The drivers behind data center transformation are customers who are trying to reduce their overall IT spending, either flowing it to the bottom line or, in most cases, trying to shift that spending away from management and maintenance and onto business projects.

We also see increasing mandates to improve sustainability. It might be expressed as energy efficiency in handling energy costs more effectively or addressing green IT.

DCT is really about helping customers build out a data center strategy and an infrastructure strategy that is aligned to their business plans, goals, and objectives. That infrastructure might be a traditional shared infrastructure model. It might be a fabric infrastructure model, of which HP's converged infrastructure is probably the best and most complete example in the marketplace today. And it may indeed be moving to private cloud or, as I believe, some combination of the above for a lot of customers.

The secret is doing so through an integrated roadmap of data-center projects, like consolidation, business continuity, energy, and such technology initiatives as virtualization and automation.

Problem area

Energy has definitely been a major issue for data-center customers over the past several years. The increased computing capability and demand has increased the power needed in the data center. Many data centers today weren't designed for modern energy consumption requirements. Even data centers designed five years ago are running out of power, as they move to these dense infrastructures. Of course, older facilities are even further challenged. So, customers can address energy by looking at their facilities.

Increasingly, we're finding that we need to look at management -- managing the infrastructure and managing the facilities in order to address the energy cost issues and the increasing role of regulation and to manage energy related risk in the data center.

That brings us not only to energy as a key initiative in DCT, but on Smart Grid for Data Center as a key way of managing it effectively and dynamically.

Oathout: What we're really talking about is a problem around energy capacity in data centers. Most IT professionals or IT managers never see an energy bill from the utility. It's usually handled by the facilities team. They never really concentrate on solving the energy consumption problem.

Where problems have arisen in the past is when a facility person says that they can’t deploy the next server or storage unit, because they're out of capacity to build that new infrastructure to support a line of business. They have to build a new data center. What we're seeing now is customers starting to peel the onion back a little bit, trying to find out where the energy is going, so they can increase the life of their data center.

To date, very few clients have deployed comprehensive software strategies or facility strategies to corral this energy consumption problem. Customers are turning their focus to how much energy is being absorbed by what and, then, how they can increase the capacity of the data center so it can support new workloads.

What we're seeing today is that software, hardware, and people need to come together in a process that John described in DCT, an energy audit, or energy management.

All those things need to come together, so that customers can now start taking apart their data center, from an analysis perspective, to find out where they are either over-provisioned or under-provisioned from a capacity standpoint, so they know where all the energy is going. Then, they can take some steps to get more capability out of their current solution or out of their installed equipment by measuring and monitoring the whole environment.

Adding resources

The concept of converged infrastructure applies to data center energy management. You can deploy a particular workload onto an IT infrastructure that is optimally designed to run efficiently, and to keep running efficiently, so that you know you're getting the most productive work from the least energy and the most energy-efficient equipment infrastructure sitting underneath it.

As workloads grow over time, you then have the auditing capability built into the software ... so that you can add more resources to that pool to run that application. You're not over-provisioning from the start and you're not under-provisioning, but you're getting the optimal settings over time. That's what's really important for energy, as well as efficiency, as well as operating within a data center environment.

You must have tools, software, and hardware that are not only efficient, but can be optimized and kept running in an optimized way over a long period of time.

Collect information

The key to that is to understand where the power is going. One of the first things we recommend to a client is to look at how much power is being brought into a data center and then where it is going.

What you want to do is start collecting that information through software to find out how much power is being absorbed by the different pieces of IT equipment and associate that with the workloads that are running on them. Then, you have a better view of what you're doing and how much energy you're using.
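The bookkeeping described here, metering each device and attributing its draw to the workloads running on it, can be sketched in a few lines. The device names, readings, and workload mapping below are invented for illustration:

```python
# Hypothetical power samples (watts) per device, and which workload runs where.
samples = {
    "server-01": [310, 325, 318],
    "server-02": [180, 175, 190],
    "storage-01": [420, 415, 425],
}
workload_map = {"server-01": "ERP", "server-02": "web", "storage-01": "ERP"}

def avg(readings):
    return sum(readings) / len(readings)

# Roll average draw up per workload, so energy is attributed rather than lumped.
workload_watts = {}
for device, readings in samples.items():
    workload = workload_map[device]
    workload_watts[workload] = workload_watts.get(workload, 0.0) + avg(readings)

print(workload_watts)
```

With the draw attributed per workload, the analysis step that follows, matching a workload to a better-suited platform, has concrete numbers to work from.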

Then, you can do some analysis and use some applications like HP SiteScope to do some performance analysis, to say, "Could I match that workload to some other platform in the infrastructure or am I running it in optimal way?"

Over time, what you can do is you can migrate some of your older legacy workloads to more efficient newer IT equipment, and therefore you are basically building up a buffer in your data center, so that you can then go deploy new workloads in that same data center.

You use that software to your benefit, so that you're freeing up capacity, so that you can support the new workload that the businesses need.

The energy curve today is growing at about 11 percent annually, and that's the amount IT is spending on energy in a data center.
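Compounding at that rate adds up quickly. A rough projection, assuming a hypothetical $1 million starting bill (the starting figure and horizon are invented):

```python
# Project an annual energy bill growing ~11 percent per year, compounded.
def project(spend, rate=0.11, years=5):
    return [round(spend * (1 + rate) ** y, 2) for y in range(years + 1)]

# A hypothetical $1M bill grows past $1.68M within five years at 11 percent.
print(project(1_000_000))
```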



Bennett: That's really key, Doug, as a concept, because the more you do at this infrastructure level, the less you need to change the facilities themselves. Of course, the issue with facilities-related work is that it can affect both quality of service and outages and may end up costing you a pretty penny, if you have to retrofit or design new data centers.

Oathout: Smart Grid for Data Centers gives a CIO or a data-center manager a blueprint to manage the energy being consumed within their infrastructure. The first thing that we do with a Data Center Smart Grid is map out what is hooked up to electricity in the data center, everything from PDUs, UPSs, and air handlers to the IT equipment: servers, networking, and storage. It's really understanding how that all works together and how the whole topology comes together.

The second thing we do is visualize all the data. It's very hard to say that this server, that server, or that piece of facilities equipment uses this much power and has this kind of capacity. You really need to see the holistic picture, so you know where the energy is being used and understand where the issues are within a data center.

It's really about visualizing that data, so you can take action on it. Then, it's about setting up policies and automating those procedures to reduce the energy consumption or to manage energy consumption that you have in the data center.

Today, our servers and our storage are much more efficient than the ones we had three or four years ago, but we also add the capability to power cap a lot of the IT equipment. Not only can you get an analysis that says, "Here is how much energy is being consumed," you can actually set caps on the IT equipment that says you can’t use more than this. Not only can you monitor and manage your power envelope, you can actually get a very predictable one by capping everything in your data center.

You know exactly how much the max power is going to be for all that equipment. Therefore, you can do much better planning. You get much more efficiency out of your data center, and you get more predictable results, which is one of the things that IT really strives for: meeting SLAs and delivering those predictable results, day in and day out.
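The planning arithmetic that capping enables can be sketched simply: once every device has an enforced ceiling, the worst-case facility draw is just the sum of the caps. The cap values and budget below are invented:

```python
# Invented per-device power caps, in watts.
caps_watts = {"server-01": 350, "server-02": 250, "storage-01": 500}

def max_power(caps):
    # With enforced caps, worst-case draw is exactly the sum of the ceilings.
    return sum(caps.values())

def fits_budget(caps, budget_watts):
    # Capacity planning becomes a simple comparison against the facility budget.
    return max_power(caps) <= budget_watts

print(max_power(caps_watts))          # worst case: 1100 W
print(fits_budget(caps_watts, 1200))  # True: headroom to deploy more gear
```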

Mapping infrastructure

So, really, Data Center Smart Grid for the infrastructure is about mapping the infrastructure. It's about visualizing it to make decisions. Then, it's about automating and capping what you've got, so you have more predictable results and you're managing it, so that you're not having outages, you're not having problems in your data centers, and you're meeting your SLAs.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Tuesday, February 9, 2010

AmberPoint finally gets acquired as Oracle fills in more remaining stack holes

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Thanks go out to Oracle on Feb. 8 for finally putting us out of our suspense. AmberPoint was one of a dwindling group of still-standing software independents delivering run-time governance for SOA environments.

It's a smart move for Oracle as it patches some gaps in its Enterprise Manager offering, not only in SOA runtime governance, but also in business transaction management and, potentially, better visibility into non-Oracle systems. Of course, that visibility will in part depend on the kindness of strangers, as AmberPoint partners like Microsoft and Software AG might not be feeling the same degree of love going forward.

We’re surprised that AmberPoint was able to stay independent for as long as it had, because the task that it performs is simply one piece of managing the run-time. When you manage whether services are connecting, delivering the right service levels to the right consumers, ultimately you are looking at a larger problem because services do not exist on their own desert island.

Neither should runtime SOA governance. As we’ve stated again and again, it makes little sense to isolate run-time governance from IT Service Management. The good news is that with the Oracle acquisition, there are potential opportunities, not only for converging runtime SOA governance with application management, but as Oracle digests the Sun acquisition, providing full visibility down to infrastructure level.

But let’s not get ahead of ourselves here as the emergence of a unified, Oracle on Sun turnkey stack won’t happen overnight. And the challenge of delivering an integrated solution will be as much cultural as technical, as the jurisdictional boundary between software development and IT operations blurs. But we digress.

Nonetheless, over the past couple of years, AmberPoint itself has begun reaching out from its island of SOA runtime, as it extended its visibility to business transaction management. AmberPoint is hardly alone here, as we've seen a number of upstarts like AppDynamics or BlueStripe (typically formed by veterans of Wily and HP/Mercury) burrowing down into the space of instrumenting transactions from hop to hop. Transaction monitoring and optimization will become the next battleground of application performance management, and it is one that IBM, BMC, CA, HP, and Compuware are hardly likely to passively watch from the sidelines. [Disclosure: CA, HP and Compuware are sponsors of BriefingsDirect podcasts.]

Last one standing

As for whether run-time SOA governance demands a Switzerland-style independent vendor approach, that leaves it up to the last one standing, SOA Software, to fight the good fight. Until now, AmberPoint and SOA Software have competed for the affections of Microsoft; AmberPoint has offered an Express web services monitoring product that is a free plug-in for Visual Studio (a version is also available for Java); SOA Software offers extensive .NET versions of its service policy, portfolio, repository, and service manager offerings.

Nonetheless, although AmberPoint isn't saying anything outright about the WebLogic (formerly BEA's, now Oracle's) share of its 300-customer installed base, that platform was first among equals when it came to R&D investment and presence. BEA previously OEM'ed the AmberPoint management platform, an arrangement that Oracle ironically discontinued; in this case, the story ends happily ever after. As for SOA Software, we would be surprised if this deal didn't push it into a closer embrace with Microsoft.

Postscript: Thanks to Anne Thomas Manes for updating me on AmberPoint's alliances. They are/were with SAP, TIBCO Software, and HP, in addition to Microsoft. Their Software AG relationship has faded in recent years. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Of course all this M&A rearranges the dance floor in interesting ways. Oracle currently OEMs HP’s Systinet as its SOA registry, an arrangement that might get awkward now that Oracle’s getting into the hardware business. That will place into question virtually all of AmberPoint’s relationships.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Monday, February 8, 2010

Converged infrastructure approach paves way for improved data center productivity, private clouds

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

Improved data center productivity now appears to be a natural progression from converged infrastructure. Many enterprise data centers have embraced a shared service management model to some degree, and now converged infrastructure applies the shared service model more broadly to leverage modular system design and open standards, as well as to advance proven architectural frameworks.

The result is a realignment of traditional technology silos into adaptive pools that can be shared by any application, as well as optimized and managed as ongoing services. Under this model, resources are provisioned dynamically, efficiently, and automatically, improving business productivity. This also helps rebalance IT spending away from a majority of spend on operations and more toward investments, innovations, and business improvements.

This latest BriefingsDirect discussion explores the benefits of a converged infrastructure approach and how to attain a transformed data center environment. We'll see how converged infrastructure provides a stepping stone to private cloud initiatives. But, as with any convergence, there are a lot of moving parts, including people, skills, processes, services, outsourcing options, and partner ecosystems.

We're here with two executives from Hewlett-Packard (HP) to delve deeply into converged infrastructure and to learn more about how to get started and deal with some of the complexity, as well as to know what to expect as payoff. Please welcome Doug Oathout, Vice President, Converged Infrastructure at HP Storage, Servers, and Networking, and John Bennett, Worldwide Director, Data Center Transformation Solutions at HP. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: I often think of many CIOs as being at the heart of a vise, where, on one side, they have the business pressures. ... They need to support growth. They need to do a faster job of integrating acquisitions. They need to spend more on business projects and innovation. They need to exploit technology for business advantage. They need to reduce costs.

On the other side of the vise are the constraints in their environment that get in the way of successfully addressing those business needs: legacy infrastructure and applications, antiquated methods of managing the infrastructure that make it difficult to respond to change, or people whose skills won't serve modern technology needs or environments.

Data-center transformation (DCT) helps enterprises implement a data center and infrastructure strategy that's aligned to their goals and objectives. The key here is that it's customer-driven, and it has to be built around the plans and directions of the targeted organization. This is clearly not a one-size-fits-all type of environment.

For many organizations, those strategies for infrastructure can include traditional shared infrastructure solutions or servers using virtualization and automation with shared storage environments. Increasingly, we've seen a natural evolution into a tighter integration of the capabilities and assets of the data center in the fabric infrastructure.

HP's Converged Infrastructure represents a pretty significant step forward in terms of benefits and capabilities for customers looking at having infrastructure strategy aligned to their future needs. The neat thing is that converged infrastructure can be the foundation for private cloud architectures.

Oathout: About two-thirds, if not 70 percent, of the IT operations budget is spent on maintaining IT and the IT workload within the data center.

When you have a recession, like we just experienced, what happens is that 30 percent spent on innovation or new workload placement gets cut immediately to help manage the budget within an organization. Therefore, in the last 18 months, very little innovation and few new projects were taken on by IT to support new business growth.

Now we have customers who are starting to spend again and who are starting to see the light at the end of the tunnel. They want their IT environment to be more flexible in the future. So, they're looking at their server and storage upgrades, and how they can implement converged infrastructure, so that the new infrastructure is more flexible and can adapt more to the requirements of the business.

As you're going through your technology refresh now, coming out of the recession, you can start implementing better and faster IT equipment. You can also use better and more efficient processes -- virtualization, automation, and management. When you put those pools of resources in place, you put them in a virtual environment so they can be shared among applications or can be transferred among applications when needed.

You are now in the process of creating pools of resources, versus the dedicated, siloed resources you had prior to the recession, which couldn't be reused for other applications and therefore couldn't support business growth.

The opportunity now is to break down those silos, give our customers the ability to share resources in the same footprint they have today, and actually become more efficient, so that when business changes or business needs change, they can adapt to the requirements of the business.

In a converged infrastructure environment, you really don’t want to care about the infrastructure you are putting it on. What you want to care about is that it's resilient, it's optimized, and it's modular, so it can grow and shrink with the application's demand.

Servers and storage lead the way

Let me give you an example. A server consolidation using virtualization and new server equipment will generally double or triple your capacity within your data center for the same footprint, just by getting the utilization of the servers up, better performance within the servers, and better capabilities within virtual environments. You can basically double or even triple the size of your capacity within your data center.

The same thing holds true for storage. Storage disk drives become twice as dense over a two- or three-year period. The performance of the drives gets better. So, for the same footprint in your data center you can actually fit twice as much storage.
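The consolidation gains described above come largely from utilization. A back-of-envelope sketch, with invented workload and server figures, shows how raising utilization shrinks the host count for the same aggregate demand:

```python
import math

# Hosts needed to carry a fixed aggregate workload at a target utilization.
def servers_needed(workload_units, units_per_server, utilization):
    return math.ceil(workload_units / (units_per_server * utilization))

# Invented numbers: 500 workload units, hosts rated at 10 units each.
legacy = servers_needed(500, 10, 0.15)       # unvirtualized hosts, ~15% busy
virtualized = servers_needed(500, 10, 0.60)  # consolidated hosts, ~60% busy
print(legacy, virtualized)  # the gap is capacity freed in the same footprint
```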

... What you really have is a process change that's required between the IT application managers, the test and development people, and a team that actually runs the infrastructure. They need to talk more about standardization. They need to talk about how their IT comes together.

That's where the Data Center Transformation Workshop that John Bennett's team does helps. It gives you an architecture for future deployments, so that you have a converged infrastructure. You have pools of resources to put new applications down or revamp older applications onto a newer architecture, so it becomes more flexible.

You have to break down that silo, or that fence, between application deployments, what lines of business are telling the application deployers, and the people who run the infrastructure. Customers really do see that as a deployment barrier, but they're working through it, because there are significant benefits on the other side: you increase agility, lower cost, and have more money and more people to do the innovation that supports the workloads of future businesses.

Bennett: Good organizations are always rethinking IT. What are the organization's strategy, goals, and objectives? What is it going to take to realize those objectives? What capabilities do we need from IT in order to make those real? And then, how do we make them happen?

This is where the partnership between the technology team and the business team comes into play. The technology team will have more insights into how it can be exploited, and the key thing for the business is to make sure they specify their needs and not specify the answer.

... There's economic return to the organization from being able to roll out a new business service more quickly. There's an economic return to the business from being able to provision more resources when they are needed based on demand, so that demand doesn't disappear. There's a competitive business benefit, which is financial in nature, in being able to respond to competitive threats more quickly.

And a lot of the benefits of this are in the nature of direct cost savings -- the consolidation, modernization, and virtualization that Doug spoke to -- the savings from energy related projects and investments with Data Center Smart Grid, for example. All are easily quantifiable.


For more information on HP's Virtual Services, please go to: www.hp.com/go/virtualization and www.hp.com/go/services


Oathout: A cloud-computing environment is really an application-rich environment that allows you to bring more users on quickly and expand your capabilities and shrink your capabilities as you need them.

Converged infrastructure can be for public cloud, private cloud, or for a web workload, a high-performance computing (HPC) workload, or an SAP workload. It doesn't really matter. A converged infrastructure is the optimal deployment of IT to support any kind of application, because it's modular in nature.

It has the flexibility to have more storage, more memory, less CPUs or more CPUs, less storage, or less memory, but it's all modular, so you can put the pieces together as you need them. So, it is a base support for either a cloud environment or a traditional IT environment. It really doesn't matter. It's designed to support both.

A private cloud is the IT department saying, "I'm now going to create a service catalog for my lines of business to develop upfront." You're getting software as a service (SaaS) now sitting on top of either a converged infrastructure or legacy infrastructure. A converged infrastructure is a lot easier to put SaaS on. But, you make that service catalog available to lines of business, so they can turn on applications as they need them, very quickly.

Optimizing over time

Then, you can put more users on an enterprise resource planning (ERP) application, an online application, or a Web 2.0 application. IT is there as a support service now, setting that up, taking it down, and optimizing it over time, depending on the business needs.

So, private cloud is kind of that SaaS that sits on either a converged infrastructure or a legacy infrastructure or uniquely designed infrastructures that you get from some of the public cloud providers. Converged infrastructure is the optimal way to develop and deploy that in a standard data-center environment, and it's in support of a private cloud.

When you start bringing storage, server, and networking platforms together through a flexible fabric, the economies of scale of shared resources and open systems are going to drive down the cost of acquiring IT. Then, with the software and the services capabilities that companies bring to market, they're going to bring the efficiencies along with them.

So, it is inevitable, starting with the simplest of workloads, moving to some of the hardest of workloads, that you are going to have a converged infrastructure. You are going to have application as a service, whether it's internal or external from a cloud provider, just because the economies of scale are there, and the ability to deploy the stuff is so simple once you get it set up that the efficiencies are also there besides the economies of purchase.

For example, a customer, the Dallas Cowboys, built a new football stadium in the Dallas area. It's a $1.4 billion investment. In the bottom of the thing is their data center. They run 30 different businesses out of the data center in the Dallas Cowboys stadium.

They have built it on a virtual environment. They have BladeSystems. They have the FlexFabric built into the environment. They went from over 500 servers down to 16 blades, with virtual machines running on them for the point of sale environment within the stadium. It drove a smaller footprint, but also the dynamics in the server and storage environment, so they can bring on new applications for the 30 businesses very quickly.

They changed their infrastructure to support their environment. ... They bring applications online and are very reactive to the lines of business they're supporting. That's what a converged infrastructure really delivers, besides the lower economic cost that John and I have talked about. It's the efficiency to bring new opportunities to the lines of business, accelerate business growth, or increase customer satisfaction.

There are two ways to get started. They can contact one of HP’s business partners. Our business partners are enabled to do our Converged Infrastructure Maturity Model. Or, you can come to HP.com/go/ci, and it will take you to the landing page for a converged infrastructure.

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Sunday, February 7, 2010

BriefingsDirect analyst panelists peer into crystal balls for latest IT growth and impact trends

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

The next BriefingsDirect Analyst Insights Edition, Volume 49, homes in on predictions for IT industry growth and impact, now that the recession appears to have bottomed out. We're going to ask our distinguished panel of analysts and experts for their top five predictions for IT growth through 2010 and beyond.

This periodic discussion and dissection of IT infrastructure related news and events with a panel of industry analysts and guests, comes to you with the help of our charter sponsor Active Endpoints, maker of the ActiveVOS business process management system.

To help us gaze into the IT trends crystal ball, we are joined by our panel: Jim Kobielus, senior analyst at Forrester Research; Joe McKendrick, independent analyst and prolific blogger; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Dave Linthicum, CEO of Blue Mountain Labs; Dave Lounsbury, vice-president of collaboration services at The Open Group; Jason Bloomberg, managing partner at ZapThink; and JP Morgenthal, independent analyst and IT consultant. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Shimmin: Mine are geared toward collaboration and conferencing. The first and most obvious is that clouds are going to become less cloudy. Vendors, particularly those in the collaboration space, are going to start to deliver solutions that are actually a blend of both cloud and on-premise.

We've seen Cisco take this approach already by front-ending some web conferencing to off-load bandwidth requirements at the edge and to speed internal communications. IBM, at least technically, is poised to do the same with Foundations, their appliance line, and LotusLive, their cloud-based solution.

With vendors like these pulling together hybrid premise/cloud and appliance/service offerings, it's really going to let companies, particularly those in the small and medium business (SMB) space, work around IT constraints without sacrificing the control and ownership of key processes and data, which in my mind is the key, and has been one of the limiting factors of cloud this year.

Number two: I have "software licensing looks like you." As with the housing market, it's really a buyer's market right now for software. It's being reflected in how vendors are approaching selling their software. Customers have the power to demand software pricing that better reflects their needs, whether it's servers or users.

So, taking cues from both the cloud and the open-source licensing vendors out there, we will see some traditional software manufacturers set up a "pick your poison" buffet. You can have purchase options like monthly or yearly subscriptions, or flat perpetual licenses based on per seat, per server, per CPU, per request, per processor, or per value unit -- a shout-out to IBM there -- or any of the above.

You put those together in a way that is most beneficial to you as a customer to meet your use case. We saw last year with web conferencing software that you could pick between unlimited usage with a few seats or unlimited seats with limited usage. You can tailor what you pay to what you need.

Third for me is that the mobile OS wars are going to heat up. I'm all done with the desktop. I'm really thinking that it's all about Google Chrome/Android. I know there's a little bit of contention there, but Google Chrome/Android, Symbian, RIM, Apple iPhone, Windows Mobile -- all those devices will be the new battleground for enterprise users.

I think the weapons will be user-facing enterprise apps that work in concert with line-of-business solutions on the back-end. We'll see the emergence of native applications, particularly within the collaboration space, that are capable of fully maximizing the underlying hardware of these devices, and that's really key. Capabilities like geo-positioning, simultaneous voice and, eventually, video are really going to take off across all these platforms this year.

Win or lose

But, the true battle for this isn't going to be in these cool nifty apps. It's really going to be in how these vendors can hopefully turn these devices into desktops, in terms of provisioning, security, visibility, governance, etc. That, to me, is going to be where they're going to either win or lose this year.

Four is "The Grand Unification Theory" -- the grand unification of collaboration. That's going to start this year. We're no longer going to talk about video conferencing, web conferencing, telepresence, and general collaboration software solutions as separate concerns. You're still going to have PBXs, video codecs, monitors, cameras, desk phones, and all that stuff being sold as point solutions to fill specific requirements, like desktop voice or room-based video conferencing and the like.

But, these solutions are really not going to operate in complete ignorance of one another as they have in the past. Vendors with capabilities or partnerships spanning these areas, in particular -- I'm pointing out Cisco and Microsoft here -- can bring and will be bringing facets of these together technically to enable users to really participate in collaboration efforts, using their available equipment.

And last but not least ... Google Wave is really going to kick in in 2010. I may be stating the obvious, or I maybe stating something that's going to be completely wrong, but I really feel that this is going to be the year that traditional enterprise collaboration players jump head long into this Google Wave pool in an effort to really cash in on what's already a super-strong mind share within the consumer ranks.

Even though access to the beta is limited right now, there are over a million users chugging away at it, writing code and using Wave.

Of course, Google's hosted rendition will excel in supporting consumer tasks like collaborative apps and role-playing games. That's going to be big.

Linthicum: My top five are going to be, number one, cloud computing goes mainstream. That's a top prediction, I'm just seeing the inflection point on that.

I know I'm going out on the edge on this one. Go to indeed.com and do a search on cloud-computing job postings. As I posted on my InfoWorld blog a few weeks ago, it's going up at an angle that I have never seen at any time in the history of IT. The amount of growth around cloud computing is just amazing. Of course, it's different aspects of cloud computing, not just architecture, with people who are cloud computing developers and things like that.

The Global 2000 and the government, the Global 1, really haven't yet accepted cloud computing, even though it's been politically correct for some time to do so. The reason is the lack of control, security concerns, and privacy issues, and, of course, all the times the cloud providers went down. The Google outages and the loss of stuff with T-Mobile haven't really helped, but ultimately people are gearing up, hiring up, and training up for cloud computing.

We are going to see a huge inflection point in cloud computing. It's going to be more mainstream in the Global 2000 than it has been in the past. It's largely been the domain of SMBs, pilot projects, things like that. It's going to be a huge deal in 2010, and people in organizations are going to move into cloud computing in some way, shape, or form.

People are pushing back on that now. They’ve had it. They really don’t want all of their information out there on the Internet ...



The next is privacy becomes important. Facebook late last year pulled a little trick, where they changed the privacy settings, and you had to go back and reset your privacy settings. So, in essence, if you weren’t diligent about looking at the privacy settings within your Facebook account and your friends list, your information was out on the Internet and people could see it.

The reason is that they're trying to monetize people who are using Facebook. They're trying to get at the information and put the information out there so it's searchable by the search engines. They get the ad revenue and all the things that are associated with having a big mega social media site.

People are going to move away from these social media sites that post their private information, and the social media sites are going to react to that. They're going to change their policies by the end of 2010, and there's going to be a big uproar at first.

Cloud crashes

Next, cloud crashes make major news stories. We've got two things occurring right now. We've got a massive move into the cloud. That was my first prediction. And we have the cloud providers trying to scale up, when perhaps they've never scaled up to the levels that they are going to be expected to scale to in 2010. That's ripe for disaster.

A lot of these cloud providers are going to overextend and oversell, and they're going to crash. Performance is going to go down -- very analogous to AOL's outage issues, when the Internet first took off.

We're going to see people moving to the cloud, and cloud providers not able to provide them with the service levels that they need. We're going to get a lot of stories in the press about cloud providers going away for hours at a time, data getting lost, all these sorts of things. It's just a matter of growth in a particular space. They're growing very quickly, they are not putting as much R&D into what these cloud systems should do, and ultimately that's going to result in some disasters.

Next, Microsoft becomes cloud-relevant. Microsoft, up to now, has been the punch line of all cloud computing. It has had the Azure platform out there. They've had a lot of web applications and things like that. They really have a bigger impact in the cloud than most people think, even though when we think of cloud, we think of Amazon, Google, and larger players out there.


With Azure coming into its own in the first quarter of next year, and the rise of their office automation applications for the cloud, you are going to see a massive number of people moving to the Microsoft platform for development, deployment, infrastructure, and the office automation application. The Global 2000 companies that are already Microsoft players and the government agencies that have a big investment in Microsoft are going to move in that direction.

Suddenly, you're going to see Microsoft with a larger share of the cloud, and they're going to be relevant very quickly. The small- and medium-sized business space is still going to be the domain of Google, and state and local governments are still going to be the domain of Google, but Microsoft is going to end up ruling the roost by the end of 2010.

Finally, the technology feeding frenzy, which is occurring right now. People see the market recovering. There is money being put back into the business. That was on the sidelines for a while. People are going to use that money to buy companies. I think there is going to be a big feeding frenzy in the service-oriented architecture (SOA) world, in the business intelligence (BI) world, and definitely in the cloud-computing world.

Lots of these little companies that you may not have heard about, which may have some initial venture funding, are suddenly going to disappear. Google has been taking these guys out left and right. You just don’t hear about it. You could do a podcast just on the Google acquisitions that have occurred this week. That's going to continue and accelerate in 2010 to a point where it's almost going to be ridiculous.

Lounsbury: I'm going to jump on the cloud bandwagon initially. We’ve seen huge amounts of interest across the board in cloud and, particularly, increasing discussions about how people make sense of cloud at the line-of-business level.

Another bold prediction here is that the cloud market is going to continue to grow, and we'll see that inflection point that Dave Linthicum mentioned. But, I believe that we're going to see the segmentation of that into two overarching markets: an infrastructure-as-a-service (IaaS)/platform-as-a-service (PaaS) market and a software-as-a-service (SaaS) market. So that's my number one prediction.

We'll see the continued growth in the acceptance by SMBs of the IaaS and PaaS for the cost and speed reasons. But, the public IaaS and PaaS are going to start to become the gateway drug for medium- to large-size enterprises. You're going to see them piloting in public or shared environments, but they are going to continue to move back toward that locus of controlling their own resources in order to manage risk and security, so that they can deliver their service levels that their customers expect.

My third prediction, again in cloud, is that SaaS will continue to gain mainstream acceptance at all levels in the enterprise, from small to large. What you’ll see there is a lot of work on interfaces and APIs and how people are going to mash up cloud services and bring them into their enterprise architectures.


This actually ties into another trend that Dave Linthicum has mentioned: a blurring of the line between SaaS and SOA at the enterprise level. You'll see these well on the way to emerging as disciplines in 2010.

The fourth general area is that all of this interest in cloud and concern about uptake at the enterprise level is going to drive the development of cloud deployment and development skills as a recognized job function in the IT world, whether it's internal to the IT department or as a consultancy. Obviously, as a consultancy, we look to the cloud to provide elasticity of deployment and demand and that's going to demand an elastic workforce.

So the question will be how do you know you are getting a skilled person in that area. I think you'll see the rise of a lot of enterprise-level artifacts such as business use cases, enterprise architecture tools, and analytic tools. Potentially, what we'll see in 2010 is the beginning of the development of a body of knowledge: practitioners in cloud. We'll start to recognize that as a specialty the way we currently recognize SOA as a specialty.

Of course, all of this is set against the context that all distributed computing activities are set against: security and privacy issues. I don't know if this is a prediction or not, but I wonder whether, in 2010, we're going to see the cloud's first big crash and its first big breach.

We've already mentioned privacy here. That's going to become increasingly a public topic, both in terms of the attention in the mainstream press and increasing levels of government attention.

There have been some fits and starts at the White House level about the cyber czar and things like that, but every time you turn around in Washington now, you see people discussing cyber security. How we're going to grow our capability in cyber security and increasing recognition of cyber security risk in mainstream business are going to be emerging hot topics of 2010.

Kobielus: Number one: IT is increasingly going to in-source much of BI development -- of reports, queries, dashboards, and the like -- to the user through mashup and self-service approaches, SaaS, flexible visualization, and so forth, simply because it has to.

IT is short staffed. We're still in a recession essentially. IT budgets are severely constrained. Manpower is severely constrained. Users are demanding mashups and self-service capabilities. It's coming along big time, not only in terms of enterprise deployment, but all the BI vendors are increasingly focused on self-service solution portfolios.

Number two: The users who do more of the analytics development are going to become developers in their own right. That may sound crazy based on the fact that traditionally data mining is done by a cadre of PhD statisticians and others who are highly specialized.


Question analysis, classification and segmentation, and predictive analytics are coming into the core BI stack in a major way. IBM’s acquisition of SPSS clearly shows that not only is IBM focusing there, but other vendors in this space, especially a lot of smaller players, already have some basic predictive analytics capabilities in their portfolios or plan to release them in 2010.

Basically, we're taking data mining out of the hands of the rocket scientists and giving it to the masses through very user-friendly tools. That's coming in 2010.

Number three: There will be an increasing convergence of analytics and transactional computing, and the data warehouse is the hub of all that. More-and-more transactional application logic will be pushed down to be executed inside of the data warehouse.

The data warehouse is a greater cloud, because that's where the data lives and that's where the CPU power is, the horsepower. We see Exadata Version 2 from Oracle. We see Aster Data nCluster Version 4.0. And other vendors are doing similar things, pointing ahead to the coming decade, when the data warehouse becomes a complete analytic application server in its own right -- analytics plus transactions.

Predictive analysis

Number four: We're seeing, as I said, that predictive analytics is becoming ever more important and central to where enterprises are going with BI and the big pool of juicy data that will be brought into predictive models. Much of it is coming from the whole Web 2.0 sphere and from social networks -- Twitters, Facebooks and the like, and blogs. That's all highly monetizable content, as Dave Linthicum indicated.

We're seeing that social network analysis has a core set of algorithms and approaches for advanced analytics that are coming in a big way to data mining tools, text analytics tools, and to BI. Companies are doing serious marketing campaign planning, optimization, and so forth, based on a lot of that information streaming in real-time. It's customer sentiment in many ways. You know pretty much immediately whether your new marketing campaign is a hit or a flop, because customers are tweeting all about it.

That's going to be a big theme in 2010 and beyond. Social network analysis really is core business intelligence for marketing and for maintaining and sustaining business in this new wave.


And, finally, number five: Analytics gets dirt cheap. Right now, we're in the middle of a price war for the enterprise data warehousing stack hardware and software. Servers and storage, plus the database licenses, query tools, loading tools, and BI are being packaged pretty much everywhere into appliances that are one-stop shopping, one throat to choke, quick-deploy solutions that are pre-built.

Increasingly, they'll be for specific vertical and horizontal applications and will be available to enterprises for a fraction of what it would traditionally cost them to acquire all those components separately and figure it out all themselves. The vendors in the analytics market are all going appliance. They're fighting with each other to provide the cheapest complete application on the market.

McKendrick: My number one trend is the impact of the economy. By all indications, 2010 is going to be a growth year in the economy. We're probably in this V shape.

See, I'm actually an optimist, not a pessimist. The world may end in 2012, but for 2010, we're going to have a great economy. It's going to move forward.

For this decade, we're looking forward to the rise of something called "social commerce," where the markets are user-driven and are conversations.

I think 2010 will be a year of growth.

Number two: Cloud computing. We’ve all been talking about that. That's the big development, the big paradigm shift. Clouds will be the new "normal." From the SOA perspective, we're going to be seeing a convergence. When we talk about cloud, we're going to talk about SOA, and the two are going to be mapped very closely together.

Dave Linthicum talks a lot about this in his new book and in his blog work. Services are services. They need to be transparent. They need to be reusable and sharable. They need to cross enterprise boundaries. We're going to see a convergence of SOA and cloud. It’s a service-oriented culture.

Number three: Google is becoming what I call the Microsoft of the clouds. Google offers a browser and email. It has a backend app engine. It offers storage. They're talking about bringing out an OS. Google is essentially providing an entire stack from which you can build your IT infrastructure. You can actually build a company’s IT infrastructure on the back of this. So, Google is definitely the Microsoft of the cloud for the current time.

Microsoft is also getting into the act as well with cloud computing, and they are doing a great job there. It’s going to be interesting to see what happens. By the way, Google also offers search as a capability.

Number four: We're going to see less of a distinction between service providers and service consumers over clouds, SOA, what have you. That's going to be blurring. Everybody will be providing and publishing services, and everybody will be consuming services.

You're going to see less of a distinction between providers and consumers. For example, I was talking to a reinsurance company a few months back. They offer a portal to their customers, the customers being insurance companies. They say that they offer a lot of analytics capabilities that their customers don’t have, and the customers are using their portal to do their own analytic work.

They don’t call it cloud. Cloud never entered the conversation, but this is a cloud. This is a company that’s offering cloud services to its consumers. We're going to see a lot of that, and it’s not necessarily going to be called cloud. You're not going to see companies saying, "We're offering clouds to our partners." It’s just going to be the way it is.

Number five: In the enterprise application area, we've seen it already, but we're going to see more-and-more pushback against where money is being spent. As I said, the economy is growing, but there is going to be a lot of attention paid to where IT dollars are going.

I base this on a Harvard Medical School study that just came out last month. They studied 4,000 hospitals over a three-year period and found that, despite hundreds of millions of dollars being invested in IT, IT had no impact on hospital operations, patient care quality, or anything else.

Morgenthal: Number one: Cyber security. I am beginning to understand how little people actually understand about the difference between security and information assurance, how little they realize that their systems are compromised, and how long it takes to eliminate a threat within an organization.

Because of all of this connectedness, social networking, and cloud, a lot of stuff is going to start to bubble up. People who thought things were taken care of are going to learn that it wasn’t taken care of, and there will be a sense of urgency about responding to that. We're going to see that happen a lot in the first half of 2010.

Number two: Mobile. The mobile platforms are now the PC of yesterday, right? The real battle is for how we use these platforms effectively to integrate into people’s lives and allow them to leverage the platform for communications, for collaboration, and to stay in touch.

It seems everywhere I go, people are willing to spend a lot of money on their data plan. So, that’s a good sign for telecoms.



My personal belief is that it's information overload, but that’s me. I know that everywhere I go, I see people using their iPhones and flicking through their apps. So, they hit upon a market segment, a very large market segment, that actually enjoys that. Whether people like me end up in a cave somewhere, the majority of people are definitely going to be focused on the mobile platform. That also relates to the carriers. I think there's still a carrier war here. We've yet to see AT&T and the iPhone in the US break apart and open up to other carriers.

Number three: Business intelligence and analytics, especially around complex event processing (CEP). CEP is still in an immature state. It does some really interesting things. It can aggregate and correlate. It really needs to go to that next step and help people understand how to build models for correlation. That’s going to be a difficult step.

As somebody was saying earlier, you had these little Poindexters sitting in the back room doing this stuff. There's a reason why the Poindexters were back there doing that. They understand the math and the formulas that underlie these analytical models.

CEP and analytics -- and the two tied together. You’ll see the BI side, and the data aspects of BI, integrate with CEP modeling to not only report after the fact on a bunch of raw data, but to be almost proactive and try to, as I said in my blog entry, know when the spit hits the fan.

Number four is collaboration. We’ve crossed the threshold here. People want it. They're leveraging it.


I've been seeing some uptake on Google Wave. I think people are still a little confused by the environment, and the interaction model is not quite there yet to really turn it on its ear, but it clearly is an indication that people like large-scale interactions with large groups of people and to be able to control that information and make it usable. Google is somewhat there, and we'll see some more interesting models emerge out of that as well.

Number five is labor. We're at a point where the market is built on all of these other things based on the cloud. We had a lot of disruptive technologies hit in the past five years -- enterprise mashups, SOA, and cloud computing. The labor market has not caught up to take advantage of these tools, design them, architect the solutions properly, and deploy and manage them.

I think that 2010 has to be a year for training, rebuilding, and getting some of those skills up. Today, you hear a lot of stories, but there is a large gap for any company to be able to jump into this. Skills are not there. The resources are not there and they are not trained. That's going to be a huge issue for us in 2010.

Bloomberg: I'm going to be a bit of the naysayer of the bunch. I just don't see cloud computing striking it big in 2010. When we talk to enterprise architects, we see a lot of curiosity and some dabbling. But, at the enterprise scale, we see too much resistance in terms of security and other issues to put a lot of investment into it. It's going to be gradually growing, but I don't see such an inflection point coming as soon as some might like.

Small organizations are a different story. We see small organizations basing their whole business models on the cloud, but at the enterprise level, it's sort of a toe in the water, and we see that happening in 2010.

Another thing we don't see really taking off in any big way is Enterprise 2.0. That is Web 2.0 collaborative technologies for the enterprise. You know, "Twitter On Steroids," and that kind of thing. Again, it's going to be more of a toe in the water thing. Collaborative technologies are maturing, but we don't see a huge paradigm shift in how collaboration is done in the enterprise. It's going to be more of a gradual process.

Another thing that we are not seeing happening in 2010 is CIOs and other executives really getting the connection between business process management (BPM) and SOA. We see those as two sides of the same coin. Architects are increasingly seeing that in order to do effective BPM you have to have the proper architecture in place. But, we don't see the executives getting that and putting money where it belongs in order to effect more flexible business process. So, this is another work in progress, and it's going to be a struggle for architects to make progress over the course of the year.

As far as the end of the recession, yeah, we're all hoping that the economy picks up, and I do see that there is going to be a lot of additional activity as a result of an improving economy, but I don't see a huge uptick in spending on software per se.

Spending in IT is going to go up, but in terms of what the executives are going to invest in, they're going to be very careful about purchasing software. That's going to drive some money to cloud-based solutions, but that's still just a toe in the water as well.

Software vendors were hoping for a huge year, but they're going to be disappointed. It's going to be a growth year, but it's going to be moderate growth for the vendors.

Those are my first four. Those are the negatives. Not to be too negative, in terms of the positive, what we see happening in 2010 is increased focus on "MSW." You know what MSW is, right? Politely speaking it's "Make Stuff Work." Of course, you could put a different word in there for the S, but Make Stuff Work, that's what we see the architects really focusing on.

They have a good idea now of what SOA is all about. They have a good idea about how the technology fits in the story and the various technologies that have been mentioned on this call, whether it's analytics, data management, SaaS, and the cloud-based approaches. Now, it's time to get the stuff to work together, and that's the real challenge that we see.

SOA-Plus

The SOA story is no longer an isolated story -- "we're going to do SOA, let's go do SOA." Now, it's SOA plus other things. So, we're going to do SOA and BPM, with the architecture driving that, despite the fact that the CIO may not quite connect the dots there.

SOA plus master data management (MDM) -- it's not one or the other now. It's how we get those things to work together. SOA plus virtualization. That's another challenge. Previously, those conversations were separate parts of the organization. We see more and more conversations bringing those together.

SOA and SaaS -- somebody already mentioned that SaaS is one segment of the cloud category. It's a little more mature than the rest. We see more organizations understanding the connection between those two and trying to put them together. Previously, we'd do middleware and we'd do SOA, but we didn't really see the connection, or we'd confuse one for the other, and that was a big issue.

We're happy to call this service-oriented, even though the organization, as a whole, may call it a variety of different things, depending on the perspective of the individual.

Baer: On cloud and virtualization, basically I agree with Jason, and I don't agree with Dave or with Joe. It’s not going to be the "new normal." This year, we're going to see a reckoning with all the management overhead of dealing with cloud and virtualization, the same way we saw with outsourcing years back, when we thought we'd just throw labor costs over the wall.

Secondly, JP, I very much believe that there is going to be convergence between BI and CEP this year. I agree with him that there's not going to be a surge of Albert Einsteins out there. On the other hand, I see this as a golden opportunity for vendors to package these analytics as applications or as services. That's where I really see the inflection curve happening.

Number three: Microsoft and Google. Microsoft will be struggling to stay relevant. Yes, people will buy Windows 7, because it's not Vista. That’s kind of a backhanded compliment to say, "We're buying this, because you didn't screw up as badly as last time." It doesn't speak well for the future.

Google meets a struggle for focus. I agree with Joe that they are aspiring to be the Microsoft of the cloud, but it may or may not be such a good thing for Google to follow that Microsoft model.

Finally, I agree with Jim that you are going to see a lot more business-oriented activity, whether it's BI, BPM, or IBM buying Lombardi. I hope they don't mess up Lombardi, and I especially hope they don't mess up Blueprint. I've already blogged about that.


One other point -- and I don't know if this fits into a top five or not -- but I found what Joe was talking about very interesting in terms of the letdown on health-care investment in IT. There's going to be a lot of pushing on electronic medical records (EMRs) this year. I very much believe in EMRs, but, on the other hand, they are no panacea. We're going to see a trough of disillusionment happen on that as well.


You may also be interested in: