Friday, December 3, 2010

Case study: AIG insurance group leverages ALM to attain IT performance architecture advantage

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Barcelona -- Welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Barcelona.

We're here in the week of November 29, 2010 to explore some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

This customer case study from the conference focuses on AIG-Chartis insurance and how its business has benefited from ongoing application transformation and modernization projects.

To learn more about AIG-Chartis insurance’s innovative use of IT consolidation and application lifecycle management (ALM) best practices, I interviewed Abe Naguib, Director of Global Performance Architecture and Infrastructure Engineering at AIG-Chartis in Jersey City, NJ.

Here are some excerpts:
Naguib: AIG is a global insurance firm, supporting many varieties of insurance worldwide.

We're structured with 1,500 companies and roughly eight lines of business that manage those companies. Each group has its own CIO, CTO, COO structure, and I report to the global CTO.

What we look at is supporting their global architecture and performance behavioristics, if you will. One of the key things is how to federate the enterprise in terms of architecture and performance, so that we can standardize the swing over into the Java world, as well as middleware and economy of scale.

When I came on board to standardize architecture, I saw there was a proliferation of various middleware technologies. As we went along, we thought about how to standardize that architecture.

As we faced more and more applications coming into the Java middleware world, we found that there was a lot of footprint waste, and a lot of delivery cycles slipped and were wasted. So, we saw a need to control it.

After we started the architectural world, we also started the production support world and a facility for testing these environments. We started realizing, again, there were things that impacted business service level agreements (SLAs), economy of scale, even branding. So, we asked, how do we put it together?

One of the key things is, as we started the performance organization, we were part of QA, but then we realized that we had to change our business strategy, and we thought about how to do that. One key thing is we changed our mindset from a performance testing practice to a performance engineering practice, and we've evolved now to performance architecture.

The engineering practice was focused on testing, analyzing, and providing some kind of metrics. But the performance architecture world now has influence on strategies, design practices, and the resolution of issues. We're currently a one-man-army kind of team, at a paratrooper level. We're multi-skilled, from architecture, to performance, to support, and we drive resolution in the organization.

We also saw that resolution had to happen quickly and effectively. Carnegie Mellon did a study about five years ago and it said that post-live application resolution of performance issues was seven times the cost of pre-live [performance application resolution].

In other words, we realized that the faster we resolved issues, the faster to market, the faster we can address things, the less disruption to the delivery practices.
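To make that multiplier concrete, here is a minimal sketch of the cost model being described, using the roughly seven-times figure from the study cited above. The per-defect cost and defect counts are invented assumptions for illustration only.

```python
# Illustrative only: the 7x multiplier follows the study cited above;
# the per-defect cost and defect counts are invented assumptions.

PRE_LIVE_COST = 1_000      # assumed cost to resolve one defect before go-live
POST_LIVE_MULTIPLIER = 7   # post-live resolution costs roughly 7x as much

def total_cost(defects, pre_live_share):
    """Total resolution cost, given the share of defects caught pre-live."""
    pre = defects * pre_live_share * PRE_LIVE_COST
    post = defects * (1 - pre_live_share) * PRE_LIVE_COST * POST_LIVE_MULTIPLIER
    return pre + post

print(total_cost(100, 0.9))  # catch 90% early: 160,000
print(total_cost(100, 0.5))  # catch only half early: 400,000
```

Shifting resolution earlier in the lifecycle is what turns that post-live penalty into the faster delivery Naguib describes.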

Too many people involved

In normal firefighting mode, architecture is involved, development is involved, and infrastructure is involved. What ends up happening is there are too many people involved. We're all scrambling, pointing fingers, looking at logs. So, we figured that the faster we get to resolution, the better for everyone to continue the train on the track.

... I have experience with Quality Center and the improvements that have gone on over the years. Because of our focus, we built our paradigm out of QA and into the performance world, and we started focusing on improving that process.

The latest TruClient product, which is a LoadRunner product, has been a massive groundbreaking point solution. In the last two years, frankly, with HP and Mercury getting adjusted, there’s been kind of a lag, but I have to give kudos to the team. [See more on HP's new ALM 11 offerings.]

One of the key things is that they have opened up their doors in terms of the delivery and their roadmap. I've worked extensively for roughly the last year with their product development team, and they have done quite a bit of improvement in their solution.

Good partnership role

They have also improved their service support model; the help desk actually resolves questions a lot faster. We also have a good partnership, and we share the things that we see, which influences their roadmap as well.

This TruClient product has been phenomenal. One of the key things we're seeing now is that BPM solutions are more Ajax-based, and there are more varieties of Ajax frameworks out there than we know how to deal with. One of the key things with the partnership is that we're able to target what we need, they are able to deliver, and we are able to execute.

LoadRunner and TruClient allow us to get in front of the console, work with the business team, capture their typical use cases in a day-in-the-life scenario, and automate that. That gets buy-in and partnership with the business.

We're also able to execute a test case now, bring it in front of the IT side, and show them the actual footprint from a business perspective, with the impact and the benefits. What ends up happening is that now we're bringing the two teams together. So, we're basically bridging the gap through execution.

... We also started working with the CIOs to figure out a strategy to develop a service-level target, if you will. As we went along, we began working with the development teams to build a relationship with the architectural teams and the infrastructure teams.

We became more of a team model, building more of a peacemaker model. We regrouped the organization so that, rather than pointing fingers at each other, we resolve issues a lot faster.

Now, we're able to address the issue. We call it "isolate, identify, and resolve." At that point, if it’s a database issue, we work directly with the DBA. If it’s an infrastructure or architecture issue, we work directly with that group. We basically cut the cycle down in the last two or three years by about 70 percent.

Because there is a change in our philosophy, in our strategy to focus more on business value, a lot more CIOs have started bringing in more applications. We see a trend growth internally of roughly about 20-30 percent every year.

I have a staff of nine. So, it’s a very agile, focused team, and they're very delivery-conscious. They're very business value-conscious, and we translate our data, the metrics that we capture, into business KPIs and infrastructure KPIs.
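As a rough illustration of that translation, the sketch below turns raw response-time samples into the kind of business-facing KPIs described here. The sample values and the two-second SLA threshold are hypothetical, not AIG figures.

```python
import statistics

# Hypothetical response-time samples in milliseconds; the 2,000 ms SLA
# threshold is an assumed service-level target, not an AIG number.
samples_ms = [220, 340, 290, 1800, 310, 275, 260, 2400, 330, 295]

breaches = sum(1 for s in samples_ms if s > 2000)
kpis = {
    "median_ms": statistics.median(samples_ms),
    "p95_ms": statistics.quantiles(samples_ms, n=20)[-1],  # ~95th percentile
    "sla_compliance_pct": 100.0 * (len(samples_ms) - breaches) / len(samples_ms),
}
print(kpis)  # numbers a CIO can act on, instead of raw test logs
```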

Because of that metric, the CIOs love what we do, because we make them look good with the business, which helps foster the relationship with the business, which helps them justify transformation in the future.

There is a new paradigm now, they call it the "Escalator Message." In 60 seconds or less, we can talk to a CIO, CTO, COO, or CFO about our strategy and how we can help them shift from the firefighting mode to more of an architecture mode.

If that’s the case, the more they can salvage their delivery, the more they can salvage their effective costs, and the more they can now shift to more of an IT-sensitive solutions shop. That helps build a business relationship and helps improve their economy of scale.

I would definitely send the message out to think in business value. Frankly, nobody really cares as much about the footprint cost, until they start realizing the dollars that are spent.

Also, now, business wants to see us more involved from the IT side, in terms of solutions, top-line improvements, and bottom-line improvements. As the performance teams expand and mature and we have the right toolsets, innovative toolsets like TruClient, we're able to now shift the cost of waste into a cost of improvements, and that’s been a huge factor in the last couple of years.

Last, I would say that in 8,000+ engagements -- we're actually closing in on 10,000 events this year -- we've seen roughly $127 million in infrastructure savings that we have recouped. Again, that helps to benefit the firm. Instead of waste, we're now able to channel that into improvements.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Case study: Enel Green Power uses PPM to gain visibility, orchestrate myriad energy activities across 16 countries

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Barcelona -- Welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Barcelona.

We're here in the week of November 29, 2010 to explore some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

This customer case study from the conference focuses on Enel Green Power and how its Italian utility business has benefited from improved management of core business processes, gained visibility into new energy projects, and maintained compliance through better planning and the ability to scope out new projects comprehensively. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

To learn about Enel Green Power’s innovative use of project and portfolio management (PPM), I interviewed Massimo Ferriani, CIO of Enel Green Power in Rome.

Here are some excerpts:
Ferriani: Enel Green Power is one of the leaders in the renewables market ... We're in all the most mature technologies such as hydro, geothermal, wind, and solar.

If you think about a matrix to cross technologies and countries, we have a lot of trouble, because we operate four technologies in 16 countries.

It's difficult because we have more than 300 plants all around the world. So, it's an asset portfolio that we have to operate, and we have to reduce the risks.

When we decided to deploy IT platforms, we didn't think it was a good idea to deploy conventional-generation IT platforms; instead, we built up new platforms better fitted to renewables' needs.

We thought about the main objective in deploying these platforms and said, "Okay, maybe we have to deploy platforms that permit us to minimize the portfolio risk, in order to know exactly what production should be." For us, knowing the production is a condition.

We have to know production, and we have to know exactly the production that we're promising to sell to the market.

The business strategy is to manage centrally and operate locally. IT had to follow the strategy. Our main IT platforms are developed with the objective of being global. Global doesn't mean managing everything centrally, but managing the IT platform centrally, because it's better for synergies and in terms of costs. But, because we have to fit local needs, we have to localize these platforms in 16 countries.

For PPM, as well, we decided to have a global, centralized, unique platform, in order to gather and collect all the data that we get from the field. This is one of the problems that we frequently have because, in effect, the operation is located everywhere. And, it’s not easy to collect information from each field operation.

We have a lot of plants in the middle of nowhere -- in the middle of the Nevada desert and in the middle of the Mato Grosso in Brazil. We have to gather information from these plants. So, it's important to have global IT platforms, because one of our main objectives is that all our people have to work in the same way.

It's also important to set the main goal of the PPM solution. Now, the PPM solution lets Enel Green Power manage its own worldwide portfolio of initiatives, both on the business development side and in the plant construction phase, because we have to remember that business development hands over to the construction of the project.

We had to do it by building a unique, centralized, integrated platform, valuable to all the countries, designed to certify the market value of the pipeline and the potential future production related to that pipeline. For us, it's absolutely important to forecast better, to make budgets, and so on. It had to be designed to support people, our colleagues, in activities like planning, project development, reporting, document management, and so on.
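A probability-weighted pipeline valuation of the kind Ferriani describes might look like the sketch below. The stage probabilities, project names, and production figures are all hypothetical; Enel Green Power's actual evaluation algorithms are not public.

```python
# Hypothetical stage-maturity probabilities for risk-weighting the pipeline.
STAGE_PROBABILITY = {
    "scouting": 0.10, "development": 0.40,
    "authorized": 0.80, "construction": 0.95,
}

pipeline = [  # invented projects for illustration
    {"name": "Nevada wind farm",  "stage": "development",  "expected_gwh": 500},
    {"name": "Brazil hydro",      "stage": "authorized",   "expected_gwh": 300},
    {"name": "Italy geothermal",  "stage": "construction", "expected_gwh": 200},
]

def weighted_production(projects):
    """Expected future production, discounted by each project's maturity."""
    return sum(p["expected_gwh"] * STAGE_PROBABILITY[p["stage"]] for p in projects)

print(f"{weighted_production(pipeline):.0f} GWh risk-weighted")  # 630 GWh
```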

Setting the main goals

So, when we decided to deploy this platform, we had a lot of work ahead of us, and we weighed a couple of approaches.

The first was to develop an integrated in-house platform in order to map the ... core processes of the project and, at the same time, to implement algorithms for portfolio evaluation.

The second was to investigate adopting a standard solution available on the market that would allow us, with little customization, to fit the needs of the business. It's important to underline that, when we started this project, it was the end of May 2010. We already knew we were going to have an IPO. We didn't know the timing exactly, but we had to be ready by the end of October, the estimated date of the IPO.

We adopted the HP solution, because the HP people convinced us that with a minimal set of customization we would be ready for the end of October -- and we did it.

We chose HP because of the ... strong automation in the collection of the data. As I said before, simplicity and flexibility were also important for us. And, given our geographical distribution, a solution backed by global support was another constraint and was absolutely important.

We needed a standard technology accessible from many countries, with integration with other applications that we have, for example Microsoft Project. We also required scalability and platform growth -- and HP has a strength on this point -- because we are adopting a web service architecture. And, we wanted a unique, homogeneous view of the main KPIs.

We're only in the first phase, which is meant to support the IPO and the certification of the market value of the pipeline. But the main benefits of this platform for the business are the acquisition and centralization of the data.

For us, the flexibility was maybe one of the three main strengths of this platform and among the reasons we chose HP. But the best one, as I said before, was the minimal customization we needed in order to fit the first phase. It's not easy to have only three months' time to set up 64 workflows, because the local business development teams want the workflows fitted to their needs.

It's important for the automation to monitor all the steps of the workflow, to manage the authorization of the individual steps, and to monitor the progress of each step. All these data have to support us in planning the strategy. So, there are plenty of benefits, and maybe more benefits in the future with the evolution of this platform.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, December 1, 2010

HP Software GM Jonathan Rende on how ALM enables IT to modernize businesses faster

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Barcelona -- Welcome to a special BriefingsDirect podcast from the HP Software Universe 2010 Conference in Barcelona, an interview with Jonathan Rende, Vice President and General Manager for Applications Business at HP Software.

We're here the week of November 29, 2010 to explore some major enterprise software and solutions, trends and innovations, making news across HP’s ecosystem of customers, partners, and developers. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Dana Gardner, Principal Analyst at Interarbor Solutions, moderated the discussion just after the roll-out of HP’s big application lifecycle management (ALM) news, the release of ALM 11. [See more on HP's new ALM 11 offerings.]

Here are some excerpts:
Rende: Over the last 25 years that I've been in the business, I've seen two or three such waves [of applications refresh] happen. Every seven to 10 years, the right combination of process and technology changes come along, and it becomes economically the right thing to do for an IT organization to take a fresh look at their application portfolio.

What's different now from the previous couple of cycles is that there is no lack of business applications out there. With those kinds of impacts, requirements, and responsibilities on the business, the agility and innovation of the business is now synonymous with the agility and innovation of the applications themselves.

It’s not really the case that the people building, provisioning, testing, and defining the applications are lacking or don’t know what they're doing. It’s mostly that the practices and processes they're engaged in are antiquated.

What I mean by that is that today, acquiring or delivering applications in a much more agile manner requires a ton more collaboration and transparency between the teams. Most processes and systems supporting those processes just aren’t set up to do that. We're asking people to do things that they don’t have the tools or wherewithal to complete.

Lifecycle roles

Not only are we bringing together -- through collaboration, transparency, linking, and traceability -- the core app lifecycle roles of business analysts, quality, performance, and security professionals, and developers, but we're extending that upstream to the program management office and project managers. We're extending it upstream to architects. Those are very important constituents upstream who are establishing the standards and the stacks and the technologies that will be used across the organization.

Likewise, downstream, we're extending this to the areas of service management and the service managers who sit on help desks and need to connect. Their lifeblood is the connection with defects. Similarly, people in operations who monitor applications today need to be linked into all the information coming upstream, along with those dealing with change and the new releases happening all the time.

[ALM advances] extend upstream much further to a whole group of people -- and also downstream to a whole group of audiences.

Number one, they need to be able to share important information. There’s so much change that happens from the time an application project or program begins to the time that it gets delivered. There are a lot of changing requirements, changing learnings from a development perspective, problems that are found that need to be corrected.

All of that needs to be very flexible and iterative. You need those teams to be able to work together in very short cycles, so that they can effectively deliver, not only on time, but many times even more quickly than they did in the past. That’s what’s needed in an organization.

On top of that, there isn’t a single IT organization in the world that doesn’t have a mixed environment, from a technology perspective. Most organizations don’t choose just Visual Studio to write their applications in -- or just Java. Many have a combination of either of those, or both of those, along with packaged applications off-the-shelf.

So, one of the big requirements is heterogeneity for those applications, and the management of those applications from a lifecycle approach should be accommodating of any environment. That’s a big part of what we do.

You have to be able to maintain and manage all of the information in one place, so that it can be linked, and so you can draw the right, important information in understanding how one activity affects another.

But that process, that information that you link, has to be independent of specific technology stacks. We believe that, over the past few years, not only have we created that in our quality solutions, in our performance solutions, but now we have added to that with our ALM 11 release -- the same concepts but in a much broader sense.

Integrating to other environments

By bringing together those core roles that I mentioned before, we've been able to do that from a requirements perspective, independent of [deployment] stack -- and from a development environment. We integrate with other environments, whether it's a Microsoft platform, a Java platform, or CollabNet. The use cases that we've supported work in all of those environments very tightly -- between requirements and tests -- and pull that information all together in one place.

A business analyst or a subject matter expert who is generating requirements captures all that information from what he hears is needed: the business processes that need to be built, the application, and the way it should work. He captures all of that information, and it needs to reside in one single place. However, if I'm a developer, I need to work off a list of tasks that build to those requirements.

It’s important that I have a link to that. It’s important that my priorities that I put in place then map to the business needs of those requirements. At the same time, if I'm in quality-, performance-, and security-assurance, I also need to understand the priority of those.

So, while those requirements will fit in one place, they'll change and they'll evolve. I need to be able to understand how that impacts my test plans that I am building.

If you look at some of the statistics thrown around by third parties that do this research on an annual basis, almost two-thirds of application projects today still fail. Then, you look at what benefits can be gained if you put together the right kind of approach, system, and automation to support that approach.

Cutting cost of delivery

We're seeing organizations similarly cut the cost of releasing an application, that whole delivery process -- cut the cost of delivery in half. And that's not to mention side benefits that have a far more reaching impact later on: identifying and eliminating, at creation, up to 80 percent of the defects that would typically be found in production.

With ALM 11, we're already seeing returns where organizations are able to cut the delivery time, the time from the inception of the project to the actual release of that project, by 50 percent.

As a lot of folks who are close to this will know, finding a defect in production can be up to 500 times more expensive to fix than if you address it when it’s created during the development and the test process. Some really huge benefits and metrics are already coming from our customers who are using ALM 11.

Again, if you go back to the very beginning topic that we discussed, there isn’t a business, there isn’t a business activity, there isn’t a single action within corporate America that doesn’t rely on applications. Those applications -- the performance, the security, and the reliability of those systems -- are synonymous with that of the business itself.

If that’s the case, allowing organizations to deploy business critical processes in half the time, at half the cost, at a much higher level of quality, with a much reduced risk only reflects well on the business, and it’s a necessity, if you are going to be a leader in any industry.

There are so many different options for how people can deploy or choose to operate and run an application -- and those options are also available in the creation of those applications themselves. ALM 11 runs on-premise or through our software as a service (SaaS) offering, so it allows that flexibility.

Deep software DNA

Software and our software business are increasingly important. If you look at the leadership within the company today, our new CEO has a very deep software DNA. Bill Veghte, who came in from Microsoft, has 20-plus years. The rest of the leadership team here also has 20-plus years in enterprise software.

Aside from the business metrics that are so beneficial in software versus other businesses, there is just a real focus on making enterprise software one of the premier businesses within all of HP. You're starting to see that with investments and acquisitions, but also the investment in, more importantly, organic development and what’s coming out.

So, it’s clearly top of list and top of mind when it comes to HP. Our new CEO, Leo Apotheker, has been very clear on that since he came in.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tuesday, November 30, 2010

HP's new ALM 11 helps guide IT through shifting landscape of modern application development and service requirements

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Barcelona the week of November 29, 2010. We're here to explore some major enterprise software and solutions, trends and innovations, making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

To learn more about HP's application lifecycle management (ALM) news -- and its customer impact from the conference -- please welcome Mark Sarbiewski, Vice President of Product Marketing for HP applications. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Sarbiewski: The legacy approach is not going to be the right path for delivering modern applications. We’ve been hard at work for a couple of years now, recasting and re-inventing our portfolio to match the modern approach to software, going through them one-by-one.

You’ve got changes in how you are organized. You’ve got changes in the approach that people are taking. And, you’ve got brand-new technology in the mix and new ways of actually constructing applications. All of these hold great promise, but great challenges too. That's clashing with the legacy approach that people in the past took in building software.

We talk to our customers about this all of the time. It boils down to the same old changes that we see sort of every 10 years. A new technology comes into play with all its great opportunity and problems, and we revisit how we do this. In the last several years, it’s been about how do I get a global team going, focused on potentially a brand-new process and approach.

What are the new technologies that everybody is employing? We've got rich Internet technologies and Web 2.0 approaches, and our technology is there. For composite applications, we've built a variety of capabilities that help people understand how to get the performance right with those technologies, and keep the security and the quality high, while keeping the speed up.

That covers everything from how we do performance testing in that environment to how we test things that don't have interfaces, and how we understand the impact of change on systems like that. We've built capabilities that help people move to Agile as a process approach -- things like fundamentally changing how they can do exploratory testing, and how they can bring automation into performance, quality, and security much sooner in the process.

Lastly, we’ve been very focused on creating a single, unified system that scales to tens of thousands of users. And, it’s a web-based system, so that wherever the team members are located, even if they don’t work for you, they can become a harmonious part of the overall team, 24-hour cycles around the globe. It speeds everything up, but it also keeps everyone on the same page. It’s that kind of anytime, anywhere access that’s just required in this modern approach to software.

How is software really supported?

When I talk to customers, I ask them how they're supporting software. Software delivery is fundamentally a team sport. There isn't a single stakeholder who does it all. They all have to play and do their part.

When they tell me they’ve got requirements management in Microsoft Word, Excel, or maybe even a requirements tool, and they have a bug database for this, test management for that, and this tool here, on the surface it looks like they fitted everybody with a tool and it must be good. Right?

The problem is that the work is not isolated. You might be helping each individual stakeholder out a little bit, but you're not helping the team. The team's work relates to each other. When requirements get created or changed, there's a ripple effect. What tests have to be modified or newly created? What code then has to be modified? When that code gets checked in, what tests have to be run? It's that ripple effect of the work that we talk about as workflow automation. It's also the insight to know exactly where you are.

When the real questions -- how far along am I on this project, what quality level am I at, am I ready to release -- need to be answered in the context of everyone's work, I have to understand how many requirements are tested, and whether my highest-priority stuff is working, and against what code.

So, you see the team aspects of it. There is so much latency in a traditional approach, even if each player has their own tool. It's about getting that latency out, along with the finger-pointing and the miscommunication that also result. We take all of that out of the process and, lo and behold, we see our customers cutting their delivery times in half, dropping their defect rates by 80 percent or more, and actually doing this more cheaply with fewer people.
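The traceability idea is easy to sketch. Below is a toy model -- not ALM 11's actual schema -- of requirements linked to tests, used to answer the "how many requirements are tested?" question raised above.

```python
# Toy traceability data: requirement IDs, priorities, and linked tests.
requirements = {
    "REQ-1": {"priority": "high", "tests": ["T-1", "T-2"]},
    "REQ-2": {"priority": "high", "tests": []},          # no coverage yet
    "REQ-3": {"priority": "low",  "tests": ["T-3"]},
}
test_results = {"T-1": "pass", "T-2": "fail", "T-3": "pass"}

for req_id, req in requirements.items():
    if not req["tests"]:
        status = "NOT COVERED"
    elif all(test_results[t] == "pass" for t in req["tests"]):
        status = "passing"
    else:
        status = "failing"   # a change here ripples back to the requirement
    print(req_id, req["priority"], status)

covered = sum(1 for r in requirements.values() if r["tests"])
print(f"coverage: {covered}/{len(requirements)} requirements have tests")
```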

In requirements management, one of the big new things that we've done is allow the import of business process models (BPMs) into the system. Now, we've got the whole business process flow pulled right into the system. It can be pulled right from systems like ARIS or anything that exports the standard business process modeling notation (BPMN).
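As an aside, standard BPMN 2.0 exports are plain XML, so the process steps being imported can be enumerated with a few lines of code. The sketch below assumes a hypothetical checkout.bpmn file and shows the general idea, not ALM 11's own importer.

```python
import xml.etree.ElementTree as ET

# BPMN 2.0 models share a standard XML namespace.
BPMN_NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}

tree = ET.parse("checkout.bpmn")  # hypothetical exported model
for process in tree.getroot().findall("bpmn:process", BPMN_NS):
    print("process:", process.get("name"))
    for tag in ("task", "userTask", "serviceTask"):
        for step in process.findall(f"bpmn:{tag}", BPMN_NS):
            # Each step is a candidate requirement to prioritize and test.
            print("  step:", step.get("name"))
```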

Business processes-focused

Now, everyone who accesses ALM 11 can see the actual business process. We can start articulating that this is the highest priority flow. This step of the business process, maybe it's check credit or something like that, is an external thing but it's super-important. So, we’ve got to make sure we really test the heck out of that thing. [See more on HP's new ALM 11 offerings.]

Everyone is aligned around what we’re doing, and all the requirements can be articulated in that same priority. The beautiful thing now about having all this in one place is that work connects to everything else. It connects to the test I set up, the test I run, the defects I find, and I can link it even back to the code, because we work with the major development tools like Visual Studio, Eclipse, and CollabNet.

It's hugely important that we connect into the world of developers. They're already comfortable with their tools. We just want to integrate with that work, and that’s really what we’ve done. They become part of the workflow process. They become part of the traceability we have.

What we hear from our customers is that the coolest new technology they want to work with is also the most problematic from a performance standpoint.

Modern requirements

We went back to the drawing board and reinvented how well we can understand these great new Web 2.0 technologies, in particular Ajax, which is really pervasive out there. We now can script from within the browser itself.

The big breakthrough there is that if the browser can understand it, we can understand it. Before, we were sort of on the outside looking in, trying to figure out what a slider bar really did, and what it meant when a slider bar was moved.

Now, we can generate a very readable script. I challenge anybody. Even a businessperson can understand, when they're clicking through an application, what gets created for the performance testing script.

We parameterize it. We can script logic there. We can suggest alternate steps. The bottom line is that the coolest new Web 2.0 front ends can now be very easily performance tested. So we don't end up in that situation where it's great, you did a beautiful rich job, and it's such a compelling interface, but only works when 10 people are hitting the application. We've got to fix that problem.

It speeds everything up, because it's so readable and quick. And it just works seamlessly. We've tested against the top 40 websites, which are out there using all this great new technology, and it's working flawlessly.
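TruClient itself is proprietary, but the browser-level measurement idea can be approximated with open tooling. This hedged sketch drives a real browser with Selenium and reads the W3C Navigation Timing API; the URL is hypothetical.

```python
from selenium import webdriver

# Not TruClient -- just the same browser-level idea: if the browser can
# understand the Ajax-heavy page, we can measure it from inside the browser.
driver = webdriver.Firefox()
try:
    driver.get("https://example.com/rich-ajax-app")  # hypothetical app
    timing = driver.execute_script(
        "var t = window.performance.timing;"
        "return {firstByte: t.responseStart - t.navigationStart,"
        "        load: t.loadEventEnd - t.navigationStart};"
    )
    print(f"first byte: {timing['firstByte']} ms, full load: {timing['load']} ms")
finally:
    driver.quit()
```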

Lots of pieces

If you think about a composite application, it's really made up of lots of pieces. There are application services or components. The idea is that if I've got something that works really well, and I can reuse it and combine it with a few other things or a couple of new pieces to get a new capability, I've saved money. I've moved faster, and I'm delivering innovation to the business in a much better, quicker way. And it should be rock-solid, because I can trust these components.

The challenge is, I'm now making software out of lots of bits and pieces. I need to test every individual aspect of it. I need to test how they communicate together, and I need to do end-to-end testing.

If I try to create composite apps and reuse all this technology, but it takes me ten times longer to test, I haven't achieved my ultimate goal, which was cheaper, faster, and still high quality. So Unified Functional Testing is addressing that very challenge.

We've got Service Test, which is actually an incredible visual canvas for testing things that don't have an interface. One of the big challenges with something that doesn't have an interface is that I can't test it manually, because there are no buttons to push. It's all kind of under the covers. But we have a wonderful, easy, brand-new reinvented tool here called Service Test that takes care of all that. [See more on HP's new ALM 11 offerings.]

That's connected and integrated with our functional testing product, which allows you to test everything end-to-end at the GUI level. The beautiful thing about our approach is that you get to do that end-to-end, GUI-level type of testing and the non-GUI stuff all from one solution, and you report out all the testing that you get done.
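The underlying idea of testing a service with no user interface can be sketched without HP's tooling at all: call the endpoint directly and assert on its contract. The endpoint, payload, and response fields below are assumptions, echoing the "check credit" example mentioned earlier.

```python
import requests

def test_check_credit_service():
    # Hypothetical endpoint and payload -- no buttons to push, so the
    # "test" is a direct call plus assertions on the response contract.
    resp = requests.post(
        "https://api.example.com/credit/check",
        json={"customer_id": "C-1001", "amount": 2500},
        timeout=5,
    )
    assert resp.status_code == 200
    body = resp.json()
    assert body["decision"] in {"approved", "declined"}  # contract check
    assert "score" in body                               # required field

test_check_credit_service()
print("service contract test passed")
```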

So again, bring in a lot of automation to speed it up, keep the quality high and the time down low and you get to see it all kind of come together in one place.

Sprinter is not even a reinvention. It's brand-new thinking about how we can do manual testing in an Agile world. Think of that Instant-On world. It's such a big change when people move to an Agile delivery approach. Everyone on the team now plays kind of a derivative role of what they used to do. Developers take a part of testing, and quality folks have to jump in super-early. It's just a huge change.

What Sprinter brings is a toolset for that tester, for that person who is jumping in, getting right after the code to give immediate feedback. It's a toolset that lets the tester drop in the data a test is supposed to run through instead of typing it in. I don't have to type it anymore. I can just use an Excel spreadsheet, and I can start ripping through screens and tests really fast, because I'm not testing whether it can take the input; I'm testing whether it processes it right. [See more on HP's new ALM 11 offerings.]
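The data-driven pattern described here -- feeding spreadsheet rows through the application instead of typing them -- looks roughly like this sketch. The file name, columns, and validation rule are made up.

```python
import csv

def process_order(quantity: int, unit_price: float) -> float:
    """Stand-in for the application logic under test."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return quantity * unit_price

# Assumed columns in orders.csv: quantity, unit_price, expected_total
with open("orders.csv", newline="") as f:
    for row in csv.DictReader(f):
        try:
            total = process_order(int(row["quantity"]), float(row["unit_price"]))
            ok = abs(total - float(row["expected_total"])) < 0.01
        except ValueError:
            ok = row["expected_total"] == "error"  # bad input expected to fail
        print(row, "PASS" if ok else "FAIL")
```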

Cool tools

And when I come across an error, there's a tool that allows me to capture those screens, annotate them, and send that back to the developer. What's our goal when we find a defect? The goal is to explain exactly what was done to create the defect and exactly where it is. There are a whole bunch of cool tools around that.

The last point I'd make about this is called Mirror Testing. It's super-important. It's imperative that things like websites actually work across the variety of browsers and operating systems out there, but testing all those combinations is very painful.

Mirror Testing allows the system to work in the background: while someone is testing, say, on XP and Internet Explorer, five other systems with different combinations will be driven through the exact same test. I'm sitting in front of it, doing my testing, and in the background, Safari or Firefox is being tested. [See more on HP's new ALM 11 offerings.]

If there is an error on that system, I see it, I mark it, and I send it right away, essentially turning one tester into six. It's really great breakthrough thinking on the part of R&D here and a huge productivity bump.

What we hear from our customers is that they really do want their lives to be simplified, and the conclusion they have come to in many cases is to use Post-It Notes, emails, and Word docs. That seems simpler at first, but it quickly falls apart at scale. Conversely, if you have tools that you can only work with in one particular environment, and most enterprises have a lot of those, you end up with a complex mess.

Companies have said, "I have a set of development tools. I probably have some SAP, maybe some Oracle. I’ve got built-in .NET, with Microsoft. I do some Eclipse stuff and I do Java. I’ve got those but if you can work with those and if you can help me get a common approach to requirements, to managing tests, functional performance, security, manage my overall project, and integrate with those tools, you’ve made my life easier."

When we talk about being environment agnostic, that’s what we mean. Our goal is to support better than anyone else in the market the variety of environments that enterprises have. The developers are happy where they are. We want them as part of the process, but we don’t want to yank them out of their environment to participate. So our goal again is to support those environments and connect into that world without disrupting the developer.

And, the other piece that you mentioned is just as important. Most customers aren't taking one uniform approach to software. They know they've got different types of projects. I've got some big infrastructure software projects that I'm not going to do all the time and am not going to release every 30 days, and a waterfall or sequential approach is perfect for those.

Rock solid

I want to make sure it's rock solid, that I can afford to take that type of approach, and that it's the right approach. For a whole host of other projects, I want to be much more agile. I want to do 60-day releases or 90-day releases, or even more, and it makes sense for those projects. What I don't want, they tell us, is every team inventing its own Waterfall, Agile, or custom approach. I want to be able to help the teams follow a best-practice approach.

As far as the workflow, they can customize it. They can have an Agile best practice, a Waterfall best practice, and even another one if they want. The system helps the team do the right thing and get a common language, common approach, all that stuff. That’s the process kind of agnostic belief we have.

The great news is that today you can download all the solutions that we’ve talked about for trials. We have some online demos that you can check out as well. There are a lot of white papers and other things. You can literally pull the software 30 minutes from now and see what I'm talking about.

On the licensing side, we believe that the simplest approach is a concurrent license, which we have on most of the products that we’ve got here. For all the modules that we’ve been talking about, if you have a concurrent license to the system, you can get any of the modules. And, it’s a nice floating license. You don’t have to count up everybody in your shop and figure out exactly who is going to be using what module.

The concurrent license model is a very flexible, nice approach. It's one we've had in the past. We're carrying it forward, and we'll look to continue to simplify and make it easier for customers to understand all the great capabilities and how to license simply, so they can get their teams the modules and capabilities they need.
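Mechanically, a concurrent license behaves like a shared pool of seats. The toy model below, with an arbitrary seat count, is one way to picture it; it is not HP's licensing implementation.

```python
import threading

class LicensePool:
    """Toy floating-license pool: any user, any module, N seats at once."""
    def __init__(self, seats: int):
        self._seats = threading.Semaphore(seats)

    def __enter__(self):
        self._seats.acquire()   # block until a seat frees up
        return self

    def __exit__(self, *exc):
        self._seats.release()   # return the seat to the pool

pool = LicensePool(seats=25)    # arbitrary assumed seat count

def run_module(user: str, module: str):
    with pool:  # every module draws from the same shared pool
        print(f"{user} using {module}")

run_module("alice", "requirements")
run_module("bob", "performance-testing")
```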
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


HP rolls out ALM 11 in Barcelona to expand managed automation for modern applications

Barcelona -- In the midst of what it calls a new wave of application modernization in the enterprise, HP on Tuesday rolled out the latest version of its application lifecycle management (ALM) platform here at the Software Universe conference.

The Application Lifecycle Management 11 platform works to automate application modernization from requirements management through quality and performance. HP sees this as an important innovation in a market where Forrester Consulting predicts 69 percent of IT decision-makers have earmarked 25 percent of their annual IT budget for application modernization—and 30 percent will dedicate over half their budget to the cause. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

“Sixty-seven percent of organizations that have kick-started application modernization projects are failing,” says Jonathan Rende, vice president and general manager of the Applications Solutions business for HP Software & Solutions division. “Application teams that have to build, provision and create new critical business processes can’t keep up because they are relying on the old ways of doing things instead of the new way.”

HP application transformation

The ALM 11 platform and software solutions are part of that “new way.” Components of the HP Application Transformation solutions, these tools work to help enterprises gain control over aging applications and inflexible processes that challenge innovation and agility -- by governing their responsiveness and pace of change. It’s all part of the Instant-On Enterprise that embeds technology into everything it does. ALM 11 essentially automates workflow processes across multiple teams. [See more on HP's new ALM 11 offerings.]

“Applications are central to everything CIOs are doing right now,” Rende says. “It’s literally how companies are differentiating themselves -- and doing so in more efficient and effective ways with more value added. ALM 11 creates a single, unified system that allows business analysts, developers, security professionals, quality professionals and performance professionals to collaborate.”

[Read an interview with HP's Mark Sarbiewski on the uses and benefits of the new ALM portfolio.]

Rende also points to benefits such as risk-based decisions of application releases via ALM Project Planning and Tracking capabilities, rapid application delivery with HP Agile Accelerator 4.0, reduced business risk from application failures, and automatic import of business process models (BPM) into ALM’s Requirements Management to visualize business process flows and augment textual requirements.

Rende notes that HP isn't working in a vacuum, either. "Everybody has a mix of different applications and environments. Many times they are cobbling them together and integrating because the business processes that are critical cut across many different systems," Rende says. "Our solution is agnostic to the technologies."

Release Management

Another major focus of ALM 11 is Release Management -- the ability for program and project managers to establish milestones, criteria, and measurements in real time. The module works to answer the questions, "What's coming?" and "Is it ready?" or "Has it been tested successfully?"

“Many requirements for new apps come from production, and DevOps sits on that line between operations and applications,” Rende says. “By establishing this set of criteria, release milestones, and Gantt charts, everybody can see what is coming and what the status is, and why there are changes if there are changes.”

The HP ALM platform also offers new versions of HP Quality Center and HP Performance Center 11. These solutions work to simplify and automate application quality and performance validation to lower operational costs, freeing up investment for application innovation in the delivery phase.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

The adaptive web: Helping to bridge the CIO-CMO divide

This guest post comes courtesy of Dr. Scott Brave, co-founder and CTO of Baynote, a provider of digital marketing optimization solutions. He can be reached at brave@baynote.com.

By Dr. Scott Brave

CIO and CMO. Until very recently, many still believed these two roles couldn’t be more extreme in their differences. The stereotypical CMO was creative and guided by gut feel, whereas the CIO was steadfast, risk averse and driven by empirical evidence.

The emergence of the real-time web and its impact on customer expectations has pulled these two seemingly polar opposite disciplines much closer together. Real-time services like Twitter, Facebook and improved behavioral targeting technologies are pushing consumer expectations for instant, extremely personalized experiences to an all-time high.

This trend has forced the CIO and CMO to work in lockstep on digital marketing initiatives aimed at staying as close as possible to what the customer wants.

However, the reality is that while both sides understand their shared goals depend on working with the other, the relationship between the CIO and CMO is more often not a happy marriage. According to the CMO Council’s recent CMO-CIO Alignment Imperative report, there is a good amount of consensus among CMOs and CIOs on the central role of technology in improving the customer experience, but neither group feels like they are getting the job done.

Biggest struggle

Their single biggest struggle has become all about figuring out ways to adapt the experience – across the web as well as via mobile and email – to seemingly insatiable user expectations. This sentiment is consistent with recent M&A activity that signals the importance of web optimization technology: Adobe acquired Omniture last September, and IBM has gobbled up CoreMetrics, Unica, and most recently, Netezza for $1.7B.

The reality, however, is that current optimization approaches are still very manual and provide a rear-view mirror look at customer intent, making it impossible to target the customer in an accurate and scalable way. Alas, the CIO/CMO dilemma continues.

What we need is to build a smarter approach that allows companies to adapt to their customers' needs in real-time. The concept of collective intelligence, which I'll address below, will be critical to achieving this vision -- something I like to think of as an "adaptive web." That is, a digital experience that is always relevant and based on users' current intent and interests. It also must be device-agnostic, which is especially important given the increased mobility of the online experience -- a challenge analyst firm Forrester calls "the Splinternet."

The adaptive web is in fact central to what Gartner calls “Context-Aware Computing”, the idea that social analytics and computing will produce knowledge about individual context and preferences, allowing companies to predict and serve them what they want. According to Gartner, this model adapts interactions with the customer based on context, in contrast to today’s experience which is very reactive.

So, how close are we to building a truly end-to-end adaptive web?

To no surprise, there are numerous technical and psychological challenges for building an adaptive web. Namely, I see three primary roadblocks:
  • Privacy Issues: To deliver adaptive experiences, we have to pay attention to what people are doing online in the first place. Different users have varying levels of comfort. We’ll have to find some sort of middle ground where the value of an adaptive experience greatly outweighs users’ privacy concerns.

  • Deciding on the Method: Second, there's determining the approach itself. Do we need a "metalayer" over the web? Some sort of toolbar or plug-in that could connect users' entire web experiences across devices? Do ISPs need to get involved at the network level to watch every site users visit and how they engage with it? These are all options to consider – some more realistic than others – but the path is murky at best at this point.

  • Determining Users’ Intents: The third obstacle is the biggest obstacle of all: pure science. It’s not a trivial problem to automatically pinpoint and serve up an experience based on a user’s current intent and context. As someone who has devoted his life’s work to studying human/computer interaction, I can’t emphasize this enough. Predicting what people want and need, and adapting their web experience in real-time is perhaps one of the remaining “big picture” challenges facing technologists.
Collective intelligence

Let’s revisit collective intelligence and its role in making the adaptive web a reality. Collective intelligence refers to the process of gathering insight from a group of like-minded individuals online, often implicitly, based on their shared navigation and engagement patterns. A central concept of collective intelligence is to aggregate behaviors of the silent majority of visitors across the spectrum of digital channels, augment that information with the expertise of super-users and provide the most relevant information that meets every individual user’s goals.
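At its simplest, that aggregation can be pictured as counting which pages like-minded visitors view together, then recommending from those counts. The sketch below uses invented sessions and is far cruder than a production system.

```python
from collections import Counter
from itertools import combinations

# Invented browsing sessions; a real system aggregates millions of them.
sessions = [
    ["pricing", "docs", "signup"],
    ["docs", "signup", "support"],
    ["pricing", "signup"],
    ["docs", "support"],
]

# Count how often each pair of pages co-occurs in the same session.
co_visits = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        co_visits[(a, b)] += 1

def recommend(current_page, top_n=2):
    """Pages most often co-visited with the one the user is viewing now."""
    scores = {}
    for (a, b), n in co_visits.items():
        if current_page == a:
            scores[b] = scores.get(b, 0) + n
        elif current_page == b:
            scores[a] = scores.get(a, 0) + n
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("docs"))  # e.g. ['signup', 'support']
```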

An obvious benefit to using collective intelligence is one of mere scale: it enables machines to draw conclusions about an individual's current intent based on the knowledge and experiences of the larger community. It also gives us the power to efficiently deliver automated and real-time experiences to users. This would be very difficult within any user-by-user scenario, which again poses enormous difficulties in matters of scale.

The CIO and CMO understand why their success depends on better IT/marketing alignment. Now, the challenge will be for them to deliver. While there’s no silver bullet, I believe collective intelligence has the potential to help them form a much more harmonious and strategic partnership. CIOs and CMOs must formulate their strategies for collective intelligence, context-aware computing and other technologies enabling the adaptive web right away.

Not doing so will have profound implications for their organizations, most notably lost revenues and customer loyalty.
This guest post comes courtesy of Dr. Scott Brave, co-founder and CTO of Baynote, a provider of digital marketing optimization solutions. He can be reached at brave@baynote.com.

ThinPrint works to take cloud printing to mainstream

With companies putting more applications and data into Internet clouds, cloud printing is gaining momentum in the enterprise.

Vendors large and small are getting into the game. HP has made major announcements, while Google has hinted at the future. Apple has begun services for iOS devices. Smaller companies like HubCast and ThinPrint have entered the fray. Yet, for all the attention, cloud printing is still not mainstream.

BriefingsDirect recently caught up with Thorsten Hesse, manager of Innovative Products for ThinPrint, to discuss the business drivers of cloud printing, the various options available, and the obstacles to wider adoption of the technology.

BriefingsDirect: What are the business drivers of cloud printing adoption?

Hesse: In general, talking about printing is quite boring for most people. But people want to print. They need to print. They don’t want to talk about it, but they want to use it. They just want it to work.

Companies spend a lot of money on new printers, printer management and print driver administration, unused printouts, unnecessary paper and toner consumption, and support and help desk services. Printing is one of the most cost-intensive things in IT. Many companies also don't want to be locked in with a specific vendor.

Increasing use

Another aspect is the increasing use of cloud applications and services. How do you print from cloud offerings like Salesforce or Google Apps? Mostly you create a PDF. Well, then you need a device that can print PDFs. Additionally, the use of smartphones, tablets, and other mobile devices is becoming more and more common, and these devices often can't print at all, or can do so only with limited quality.

Altogether, there are at least six business drivers for cloud printing:
  • Printing is one of the most cost-intensive IT services—and cloud printing can save costs and enhance productivity at the same time.
  • Printing technology today depends highly on printer manufacturers.
  • Companies want print on demand.
  • Companies use cloud applications, very often unplanned.
  • Employees are becoming increasingly mobile.
  • Employees use new types of devices.
BriefingsDirect: What are the different options for cloud printing in terms of delivery?

Hesse: There are three different delivery models. The first is private cloud software: we sell software to our customers that they install in their own environment, for example in their data center or on an Amazon server in the cloud.

They buy, own, and control the software. The other end of the spectrum is a pure cloud printing service. And then in the middle we've got the hybrid cloud, where some parts are run internally in the private cloud and others in the public cloud.

BriefingsDirect: Is cloud printing secure? What makes it secure?

Hesse: First of all, the user can print content without needing to store it on the device, which brings all the advantages of central data storage -- secure and updated data in one place, no files lost when a device is lost, and availability of the service. The user can send the print job to the printer, and he can also identify the printer.

BriefingsDirect: How is cloud printing evolving?

Hesse: Our solution is evolving in many directions. On top of offering print management as a software product that the customer can purchase and install internally, we’ll offer it as a cloud service. This will be a public cloud service. Customers can run it from the cloud. They can then control their internal printing environment from the cloud.

This might sound far off, but as soon as customers manage their internal desktops from the cloud with Microsoft Intune, it will be a logical step to do the same with the printers. This will evolve into a complete print management solution that can then be used not only to control the printing environment, but to build in policies to enhance it along the way.

BriefingsDirect: What is holding businesses back from adopting cloud printing?

Hesse: They mostly don’t know what’s possible, as the discussion is fogged by limited public cloud printing solutions.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.