Wednesday, December 12, 2012

Global open-source vendors gain new leg up in selling to US agencies, thanks to favorable Talend ruling

Open-source provider Talend has received a favorable advisory ruling from the U.S. Customs and Border Protection (CBP) agency concerning the government's ability to purchase open-source software, opening the way for all software vendors to increase their share of business with US federal agencies.

The CBP has determined that software products comply with the Trade Agreement Act (TAA) when that software is manufactured in what is known as a "designated country," even if the majority of its source code was created in a non-designated country. [Disclosure: Talend is a sponsor of BriefingsDirect podcasts.]

The US TAA says that government agencies may acquire only products or services produced in certain countries -- known as designated countries. This has sometimes hampered the agencies from acquiring open-source software if some of the code was developed outside of those countries, even when the majority of production took place inside designated countries.

“Country of origin” issues sometimes have been used as a pretext to make a case against the procurement of open-source software. Talend conducts the vast majority of its software production in the U.S., France, or Germany, but like many manufacturers, it also seeks talent in countries that can fall outside those considered designated countries.

"With this finding, any other company that meets the same criteria can get the same approval," said Yves de Montcheuil, Vice President of Marketing at Talend. "And then government buying can meet the trade agreement status. The process can now be easily repeated."

While governments around the world have been moving to embrace open source for a long time, adoption has been slow and inconsistent in the U.S., though it is steadily growing as more federal agencies revise their guidelines and regulations, and some states pass laws requiring the consideration of open-source options.

Useful guidance

"The Talend Ruling is significant because government users now have useful guidance specifically addressing open source software that is developed and substantially transformed in a designated country, but also includes, or is based upon, source code from a non-designated country," said Fern Lavallee, DLA Piper LLP, counsel to Talend. "The timing of this ruling is right given the Department of Defense’s well publicized attention and commitment to Better Buying Power and DoD’s recent Open Systems Architecture initiative."
 
"This is great news for everyone in the software industry," said Bertrand Diard, co-founder and CEO of Talend. "While the news is significant for Talend and offers an opportunity for us to address needs in the federal space, our belief is that many software vendors -- whether they are open-source based or not -- will benefit from the ruling."

A copy of the advisory ruling can be obtained by emailing press@talend.com.

The U.S. Department of Defense (DoD) is currently making significant revisions to the December 2011 draft of the “DoD Open Systems Architecture, Contract Guidebook for Program Managers.” The guidance document, expected by the end of 2012, helps DoD program managers use Open System Architecture principles for National Security Systems.


Tuesday, December 11, 2012

Insurance leader AIG drives business transformation and IT service performance through center of excellence model

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how global insurance leader American International Group (AIG) has leveraged a performance center of excellence (COE) to help drive business transformation.

We learn in our discussion how AIG's Global Performance Architecture Group improved performance of their services to deliver better experiences and payoffs for businesses and end-users alike.

Here to explore these and other enterprise IT issues, we're joined by our co-host for this sponsored podcast, Chief Software Evangelist at HP, Paul Muller.

And we also welcome our special guest, Abe Naguib, Senior Director of AIG’s Global Performance Architecture Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Many organizations are now focusing more on the user experience and the business benefits and less on pure technology -- and for many, it's a challenge. From a very high level, how do you perceive the best way to go about a cultural shift, or an organizational shift, from a technology focus more toward this end-user experience focus?

Naguib: There are several paradigms involved from the COO and CFO’s push on innovation and efficiency. A lot of the tooling that we use, a lot of the products we use, help to fully diversify and resolve some of the challenges we have. That’s to keep change running.

The CIO has to keep his eye forward to periodically change tracks, ensuring that the customers are getting the best value for their money. That’s a tall order, and he has to predict benefit, gauge value, maintain integrity, socialize, and evolve the strategy of business ideas on how technology should run.

We have to manage quite a few challenges from the demand of operating a global franchise. Our COE looks at various levels of optimization and one key target is customer service, and factors that drive the value chain.

That’s aligning DevOps to business, reducing data-center sprawl, validating and making sense of vendors, products, and services, increasing the return on investment (ROI) and total cost of ownership (TCO) of emerging technologies, economy of scale, improving services and hybrid cloud systems, as we isolate and identify the cascading impacts on systems. These efforts help to derive value across the chain and eventually help improve customer value.

Gardner: Paul Muller, does this jibe with what you're seeing in the field? Do you see an emphasis that’s more on this sort of process level when it comes to IT, with, of course, more input from folks like the COO and the chief financial officer?

Level of initiatives

Muller: As I was listening to Abe's description I was thinking that you really can tell the culture of an organization by the level of initiatives, and thinking that it has. In fact, you can't change one without changing the other. What I've just described is a very high level of cultural maturity.

We do see it, but we see it in maybe 10 to 15 percent of organizations that have gone through the early stages of understanding the performance and quality of applications, optimizing them for cost and performance, but then moving through to the next stage, reevaluating the entire chain, and looking to take a broader perspective on user experience. So it's not unique, but it's certainly concentrated among the more mature in terms of organizational thinking.

Gardner: Tell us about AIG, its breadth, and particularly the business requirements that your Global Performance Architecture Group is tasked with meeting.

Naguib: AIG is a leading international insurance organization, operating across 130 countries. AIG companies serve commercial, institutional, and individual customers through one of the world’s most extensive property/casualty networks, and are leading providers of life insurance and retirement services in the US.

Among the brand pillars that we focused on are integrity, innovation, and market agility across the variety of products that we offer, as well as customer service.

With AIG’s mantra of "better, faster, cheaper," my organization’s people, strategy, and comprehensive tools help us to bridge these gaps that a global firm faces today. There are many technology objectives across different organizations that we align, and we utilize various HP solutions to drive our objectives, which is getting the various IT delivery pistons firing in the same direction and at the right time.
These include performance, application lifecycle management (ALM), and business service management (BSM), as well as project and portfolio management (PPM). Over time our Global Performance organization has evolved, and our senior management realized our strategic benefit and capability to reduce cost and mitigate production risk.

Our role eventually moved out of quality assurance's (QA’s) functional testing area to focus on emphasizing application performance, architecture design patterns, emerging technologies, infrastructure and consolidation strategies, and risk mitigation, as well as increasing ROI and economy of scale. With the right people, process, and tools, our organization enabled IT transparency and application tuning, reduced infrastructure consumption, and accelerated resolution of system performance issues in dev and production.

The key is that bringing together our business-critical and strategic drivers across IT’s various segments fosters alignment, agility, and eventually unity. Now, our leaders seek our guidance to help tune IT’s financial performance to unlock optimal business value.

Culture of IT

Gardner: Is that a pattern you're seeing, that the people in QA are in a sense breaking out of just an application performance level and moving more into what we could call an IT performance level?

Naguib: In the last six or seven years, there's been less focus on just basic performance optimization. The focus is now on business strategy impact on infrastructure CAPEX and OPEX. Correlating business use cases to their impact on infrastructure is the golden grail.

Once you start communicating to CIOs the impact of a system and the cost of hosting, licensing, headcount, service sprawl, branding, and services that depend on each other, we're more aligning DevOps with business.

Muller: I just had a conversation not three weeks ago with a financial institution in another part of the world. I asked who is responsible for your end-to-end business process -- in this case I think it was mortgage origination -- and the entire room looked at each other, laughed, and said "We don't know."

So you've really got this massive gap in terms of not just IT process maturity, but you also have business-process maturity, and it's very challenging, in my experience, to have one without having the other.

Gardner: I think we have to recognize too that most businesses now realize that software is such an integral part of their business success. Being adept at software, whether it's writing it, customizing it, implementing and integrating it, or just managing the overall lifecycle, has become kind of the lifeblood of business, not just an element of IT. Do you sense that, Abe, that software is given more clout in your organization?

Naguib: Absolutely Dana. I truly believe that. I've been kind of an internal evangelist on this, but I always say that software drives the hardware. Whether I communicate with the enterprise architects, the dev teams, the infrastructure teams, software frankly does drive the hardware.

That's really the key point here. If you start managing your root cost and performance from a software perspective and then work your way out, you’ve got the key to unlocking everything from efficiencies to optimizing your ROI and to addressing TCO over time. It's all business driven. Know your use cases. Know how it impacts your software, which impacts your infrastructure.

Converged infrastructure

Gardner: Just being productive for its own sake isn’t good enough in this economy. We have to show real benefits, and you have to measure those benefits. Maybe you have some way to translate how this actually does benefit your customers. Any metrics of success you can share with us, Abe?

Naguib: Yes, during our initial requirements-gathering phase with our business leaders, we start defining an appropriate test-modeling strategy, including volumetrics, and managing and understanding the deployment pattern, with subscriber demographics and user roles. We start aligning DevOps organizations with business targets, which improves delivery expectations, ROI, TCO, and capacity models.

Then, before production, our Application Performance Engineering (APE) team identifies weak spots to provide the production team with a reusable script setting thresholds on exact hotspots in a system, so that eventually in production, they can take appropriate proactive measures. Now, this is value add.

Muller: As we’re seeing across the planet at the moment, there's a recognition that to bring great software and information is really a function of getting Layers 1 through 7 in the technology stack working, but it's also about getting Layer 8 working. Layer 8, in this case, is the people. Unfortunately, being technologists, we often forget about the people in this process.

What Abe just described is a great representation of the importance of getting not just a functional part of IT -- in this case quality and performance -- working well, but also of recognizing that the software will one day be delivered to operational staff to monitor and manage internally in a production setting.

The big transformation taking place right now is that our organization is connecting different silos of IT delivery, in particular development, quality, and operations, to help them accelerate the release of quality applications, and to automate things like threshold setting, and optimize monitoring of metrics ahead of time. Rather than discovering that an application might fail to perform in a production setting, where you've got users screaming at you, you get all of that work done ahead of time.

Sharing and trust

You create a culture of sharing and trust between development, quality, and operations that frankly doesn’t exist in a lot of organizations, where the relationship between development and operations is pretty strained.

Gardner: Abe, how do you measure this? We recognized the importance of the metrics, but is there a new coin of the realm in terms of measurement? How do you put this into a standardized format that you’re going to take to your CFO and your COO and say here’s what's really happening?

Naguib: That's a good question. Tying into what Paul was saying, nobody cared about whether we improved performance by three seconds or two seconds. You care at the front end, when you hear users grumbling. The bottom line is how the application behaves, translating that into business impact as well as IT impact.

Business impact is the dollar value of key use cases and transactions that don't scale. Again, software drives the hardware. If an application consumes more hardware, the hardware itself is cheap nowadays, but the licenses aren’t. You have database and middleware products running in that environment, whether it's on-premise or in the cloud.

The point is that impact should be measured, and that's how we started communicating results through our organization. That's when we started seeing C-level officers tuning in and realizing the impact of performance on both the bottom line and even the top line.

Our role is to provide more insight earlier and quicker to the right people at the right time.
Leveraging HP’s partnership and solutions helped us to address technologies, whether Web 2.0, client-server, legacy systems, Web, cloud-based, or hybrid models. We were able to leverage consistent dashboards across different IT solutions internally, then target weak spots and help drive optimization, whether on premise or cloud.

Muller: In the enterprise today, it's all about getting your ideas out of your head and making them a reality. As Abe just described, most of the best ideas today that are on their way into business processes you can ultimately turn into software. So success is really all about having the best applications and information possible.

Understand maturity

The challenge is understanding how the technology, the business process, and the benefits come together and then orchestrating the delivery of that benefit to your organization. It's not something that can be done without a deliberate focus on process. Again, the challenge is always understanding your organization's maturity, not just from an IT standpoint, but importantly from a broader standpoint.

Naguib: What's the common driver for all? Money talks. Translating things into a dollar value started to bring groups together to understand what we can do better to improve our process.

What we're seeing more is that it's not just internal dev and ops that we're aligning with, or even our business service level expectations. It's also partnerships with key vendors that have opened up the roadmap to align our technologies, requirements, and our challenges into those solutions.
The gains we make are simple. They can be boiled down into three key benefits: savings, performance, and business agility. Leveraging HP's ALM solutions helps us drive IT and business transformation and unlock resources and efficiencies. That helps streamline delivery and increase the reliability of our mission-critical systems.

My favorite has always been HP's LoadRunner Performance Center. It’s basically our Swiss Army Knife to support diverse platform technologies and align business use cases to the impact on IT and infrastructure via HP SiteScope.

We're able to deep dive into the diagnostics, if needed. And the best part is, after we've dealt with tuning, we can help activate post-production monitoring using the same script, understanding where the weak spots are.

So the tools are there. The best part is they're integrated, and they actually work together very well.

Gardner: It really sounds like you've grabbed onto this system-of-record concept for IT, almost enterprise resource planning (ERP) for IT. Is that fair?

Naguib: That's a good way to put it.

Muller: One of the questions I get a lot from organizations is how we measure and reflect the benefit. What hard data have you managed to get?

Three-month study

Naguib: IDC came in and did an extensive three-month study, and it was interesting what they found. We've realized savings of more than $11 million annually for the past five years by increasing our economy of scale. Scale on a system allows more applications on the same host.

It's an efficiency from both hardware and software. They also found that our using solutions from HP increased staff productivity by over $300,000 a year. Instead of fighting fires, we're actually now focusing on innovation, and improving business reliability by over $600,000 a year.

So all that together shows a five-year ROI of about 577 percent. I was very excited about that study. They also showed that we reduced mean time to resolution by over 70 percent through production debugging, root-cause, and resolution efforts.

So what we found, and technologists would agree with me, is that today, with hardware being cheaper than software, there is a hidden cost associated with hosting an application. The bottom line is that if we don’t test and tune our applications holistically -- architecture, code, infrastructure, and shared services -- these performance issues can quickly degrade quality of service, uptime, and eventually IT value.
I have a saying, which is that quality costs money but bad quality costs more.

Gardner: Abe, any recommendations that you might have for other organizations that are thinking of moving in this direction and that want to get more mature, as Paul would say? What are some good things to keep in mind as you start down this path?

Naguib: Besides the fact that software drives the hardware -- and I can't stress that enough -- find all the ways to understand business impact and translate whatever you're testing into the business model.

What happens in scenarios such as outages? What happens when things are delayed? What is the impact on business operability, productivity, liability, and customer branding? There are so many details that stem from performance. We used to be dealing with the "Google factor" of two-second response time, but now, we're getting more like millisecond response, because there are so many interdependencies between our systems and services.

Another fact is that a lot of products come into our doors on a daily basis. Modern technologies come in with a lot of promises and a lot of commitments.

Identify what works

So it's being able to weed through the chaff, identify what works, how the interdependencies work, and then being able to partner with the vendors of those solutions and services. Having tools that add transparency into their products and align with our environment helps bring things together more. Treating IT like a business by translating the impact into dollar value helps to get everyone lined up and responsive.

Muller: It might be a little controversial here, but the first step for progress on all of this is look in the mirror and understand your organization and its level of maturity. You really need to assess that very self-critically before you start. Otherwise, you're going to burn a lot of capital, a lot of time, and a lot of credibility trying to make a change to an organization from state A to state B. If you don’t understand the level of maturity of your present state before you start working on the desired state, you can waste a lot of time and money. It's best to look in the mirror.

The second step is to make sure that, before you even begin that process, you create that alignment and that desired state in the context of the business. Make sure that your maturity aligns to the business's maturity and their goals. I just described the ability to measure the business impact of IT services in terms of revenue. Many companies can’t even do something as fundamental as that. It can be really hard to drive alignment, unless you’ve got business-IT alignment ahead of time.

I have said this so many times. The technology is a manageable problem. Layers 1 through 7, including management software to a certain degree, are solved problems most of the time. Solving the problem of Layer 8 is tough. You can reboot the server, but you can’t reboot a person.

I always recommend bringing along some sort of management of organizational change function. In our case, we actually have a number of trained organizational psychologists working for us who understand what it takes to get several hundred, sometimes several thousand, people to change the way they behave, and that’s really important. You’ve got to bring the people along with it.

Gardner: I'd like to thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue with Paul Muller through the Discover Performance Group on LinkedIn, and also to follow Raf on his popular blog, Following the White Rabbit.

You can also gain more insights and information on the best of IT performance management at http://www.hp.com/go/discoverperformance.

And you can always access this and other episodes in our HP Discover Performance Podcast Series at hp.com and on iTunes under BriefingsDirect.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.



Monday, December 10, 2012

Multi-device tool architecture from Embarcadero primes pump for accelerated enterprise mobile development for 2013

The modern class of C and C++ tools are workhorses of PC applications development. And Objective-C tools have proven the rapid application development means of choice for native mobile development for iOS and Mac OS X.

So wouldn't it be nice to let the developers with the skills and proficiency in building native applications for the prominent enterprise computing clients of yesteryear (like Windows) gain ease in bringing better apps to all the mobile and fat client types demanded for the foreseeable future?

Embarcadero Technologies thought so, and long enough ago that they began re-architecting their compiler and C++ Builder development architecture in time to now provide write-once, run-natively-anywhere-that-counts benefits. [Disclosure: Embarcadero Technologies is a sponsor of BriefingsDirect podcasts.]

And now is when it really counts, with the advent of Windows 8, growing Mac OS X use and exploding sales of iOS and Android clients.

Embarcadero on Monday made generally available C++Builder XE3, which allows a common development effort to natively target -- using a new 64-bit compiler -- Windows 8 and Mac OS X on Intel (not yet ARM) clients. And coming this summer, the same compiler will output those same apps to run natively on iOS and Android mobile clients. ARM support comes at the end of 2013.

What's more, more of the Embarcadero stable of tools and IDEs will leverage the architecture. So more tools to build more apps once that run on more devices natively. The compiler architecture is extensible to make more tools that make more code more extensible to more platforms. Almost rhymes.
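
To make the write-once idea concrete, here is a minimal sketch of what a single natively compiled source file can look like. This is plain standard C++ using predefined platform macros, offered only as an illustration of the concept; it is not Embarcadero's FireMonkey or VCL code, and a real C++Builder XE3 project would rely on that framework rather than raw #ifdefs.

    // Illustrative plain C++: one source file, recompiled natively per target.
    #include <iostream>
    #include <string>

    #if defined(__APPLE__)
    #include <TargetConditionals.h>   // distinguishes Mac OS X from iOS
    #endif

    static std::string platform_name()
    {
    #if defined(_WIN32)
        return "Windows";                          // Win32/Win64 back end
    #elif defined(__APPLE__) && TARGET_OS_IPHONE
        return "iOS";
    #elif defined(__APPLE__)
        return "Mac OS X";
    #elif defined(__ANDROID__)
        return "Android";
    #else
        return "unknown target";
    #endif
    }

    int main()
    {
        // The application logic is identical on every device; only the
        // compiler back end (and, in a real app, the UI layer) changes.
        std::cout << "Hello from " << platform_name() << std::endl;
        return 0;
    }

The promise of the XE3 architecture is to push that same single-source discipline beyond what raw C++ conditionals can reach, up into the UI and device layers.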

Vision to close chasms

The vision to bridge the long-standing chasm between mobile and full client environments -- never mind the Windows-Mac chasm -- came as Embarcadero acquired the CodeGear technology set from Borland back in 2008. Embarcadero said it immediately set out to build C++Builder XE3 then, to allow one code effort for many more targets.
"The old way of supporting multiple platforms was not practical," said Michael Swindell, Senior Vice President of Marketing and Product Management at Embarcadero in San Francisco. That old way included highly redundant and costly development to target different platforms. The old way forced ISVs and enterprises to make guesses about which clients to target, despite an extremely dynamic market and fast-changing users preferences.

"We needed to re-organize for a multi-client world," said Swindell. He said that ISVs and developers can hedge their bets by using C++Builder XE3 now, with the knowledge the same code will be able to quickly tuned and deployed in Q3 of this year on iOS and Android.

The common mantra behind Delphi and C++Builder, as well as any RAD IDE, of course, is to make less code do more work, fast. C++Builder XE3 takes that a big step further by applying Embarcadero's agile benefits to a common architecture supporting the major IDEs to deliver cross-client platform development on all the major targets. Full Delphi support on the new C++Builder XE3 underlying architecture comes this spring, with all the Delphi database connectivity and web services support built in.

And there are some additional synergies that should appeal to the commercial ISVs. The C++Builder XE3 architecture is already "app store ready," enabling ease in bringing the apps to Apple and Google app stores. But for enterprises, Embarcadero is also developing synergies between its AppWave capabilities and C++Builder XE3 so that enterprises too can gain a streamlined means to deploy apps for PC, Mac, iOS, and Android users from an AppWave app store. Expect that in the fall, said Swindell.

So the net-net on this from my perspective is that Embarcadero has primed the pump for accelerated enterprise mobile development for 2013. And, it's given developers with C and C++ skills the means to build and deploy via app stores mobile apps on-demand, via subscription models, even inside enterprises. It also means that apps can be designed with common logic and requirements and then delivered on multiple devices, so workforces can use those apps anywhere, anytime. Very powerful.

Best of mobile to enterprise

In essence, this brings what we have come to like about consumer, entertainment, and web apps to the workplace on all relevant platforms natively -- in a way that's not too complicated, costly or time-consuming.

I'm not seeing that in any comprehensive way from Microsoft, Apple or Google, nor from any PaaS development offerings in the market.

And so I would expect that PaaS-hungry providers may look to OEM or otherwise license the C++Builder XE3 technology to bring to a cloud deployment model, and to better cross the PC-Mac divide, and to consolidate new apps development for all uses.

The C and C++ IDE tools and C++Builder XE3 technology, incidentally, need not only run on-premises. Embarcadero is exploring the means to make it all cloud-based, and to make tool clients using HTML5. A hybrid future for such multi-device development can't be too far off.


GigaSpaces survey shows need for tools for fast big data, strong interest in big data in cloud

It's no surprise that most enterprises are now taking big data more seriously. But what might raise an eyebrow is how many organizations say they rely on real-time processing of big data to fuel their business, as well as the number of companies who say they're thinking about taking their big data to the cloud.

These findings come from a recent survey conducted by GigaSpaces, which asked 243 IT executives in various industries about their big data perceptions and plans. GigaSpaces, a provider of end-to-end scaling solutions for distributed application environments and an open platform-as-a-service (PaaS) stack for cloud deployment, conducted the survey online during the fall of 2012.

Among the survey findings:
  • Some 80 percent of respondents said that big-data processing is a mission-critical function

  • More than 70 percent said their business requires processing of big data fast -- in real time -- either in large volumes, at high velocity, or both

  • Only 20 percent of respondents said they have no plans to move their big data to the cloud, indicating a widespread readiness to consider the option

The first finding shows that enterprises are moving beyond collecting and storing big data and delving deeper. Their businesses require that they process this data in real time as events occur, be they trades on a stock exchange, alerts from security monitors, or location changes from GPS devices.

The second finding demonstrates the need for low latency and high performance in processing big data streams, as these functions are becoming mission critical and delays or dropped data can't be tolerated.

Real-time tools

GigaSpaces, which sponsored the survey, also asked survey respondents what tools they're using to process big data in real time, and here's where a gap is revealed: only 12 percent have adopted real-time event processing tools. According to GigaSpaces, this suggests that most enterprises still have not found the right solution that offers the ability to handle massive data while also providing the required speed.

"Most enterprises haven’t yet adopted these real-time event processing tools, they're managing instead with a combination of a NoSQL data store with a Hadoop processing platform," says Tsipi Erann, marketing communications manager at GigaSpaces. "It's clear that enterprises haven’t yet found the right solution that’s dedicated to real-time processing and also fits into their architecture."

As for moving big data to the cloud, survey respondents seem eager to reap the cost-savings and improved agility offered by this model. Only 20 percent of them said they have no plans to move big data applications to the cloud, while 44 percent have concrete plans or have already started this migration.

Among the 34 percent who said they were unsure about cloud deployments, primary concerns cited were scalability and security.

GigaSpaces cross-referenced answers to the question of big data's business importance with answers to the cloud question and came up with this statement: 80 percent of respondents who define their big data applications as mission critical to the business are planning or considering a move to the cloud. The company said it will use findings from this survey to help shape the direction of its offerings.

"We understand the importance of giving customers the right features and will use the input in the creation of such a solution, whether it’s integration with Hadoop or processing or transactional management," says Yaron Parasol, product manager at GigaSpaces.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)


Wednesday, December 5, 2012

Message Bus bets its cloud-native messaging service will improve the art of email delivery


Message Bus has a pedigreed CEO, an impressive list of customers and partners, and technology that makes its cloud-based service highly scalable and resilient, yet the young company's goal is simple: help customers keep their legitimate email messages out of recipients' spam folders.

With Twitter co-founder Jeremy LaTrasse at the helm, Message Bus is navigating the often dark waters of email delivery so that its customers don't have to. The company's Global Delivery Network, launched in mid-November, aims to be to email and mobile messaging what Amazon Web Services is to cloud computing and Dropbox is to cloud storage.

The service is a cloud-native application, meaning that it's not tied to the underlying infrastructure of a single cloud service provider. Therefore Message Bus can scale and move its customers' workloads across different cloud infrastructures as needed (the company says it currently deploys on Joyent, Amazon Web Services and Rackspace cloud services). This approach avoids the scale limitations of working with a single cloud service provider, as well as the possibility of service disruption if a provider experiences an outage.

But it takes more than the right architecture to provide an effective message delivery service. Message Bus has done extensive relationship building with top ISPs including AOL, Microsoft and Google to understand what they expect from a trusted sender and sticks to those guidelines, resulting in a higher likelihood that legitimate emails make it to the inbox.

"More than 90 percent of all mail worldwide ends up in one of those places; if there’s no trust with those ISPs then the message won’t make it into the box," says LaTrasse. "So we had the idea to build best practices into the network, so everyone who sends through our service follows them. We made the relationships happen, and all our customers benefit, as well as their recipients."

Out of control

Currently, one in five legitimate emails is either blocked or routed to the spam folder, says Message Bus, making it difficult for companies relying on email as a primary driver of revenue and brand recognition to get their message across. What's more, the cost and complexity of launching messaging campaigns across multiple channels (email, mobile and social messaging, etc.) is spinning out of control.

Customers of the Global Delivery Network don't need dedicated messaging hardware or personnel; instead they build a virtual SMTP bridge to send their messages across Message Bus' network. This significantly reduces upfront infrastructure costs as well as ongoing staffing, says LaTrasse, and allows customers to focus on the content of the messages, knowing that they'll be delivered in a manner that's effective, secure, and compliant.
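
In practice, the virtual SMTP bridge means an application points its outbound mail at a hosted relay endpoint instead of at locally managed mail servers. As a rough sketch of what that submission can look like from the sender's side, here is an authenticated SMTP relay call using libcurl; the relay host, credentials, and addresses below are placeholders for illustration, not Message Bus's actual endpoints.

    // Hypothetical sketch: relaying one message through a hosted SMTP endpoint
    // with libcurl. Host, credentials, and addresses are placeholders.
    #include <curl/curl.h>
    #include <cstring>
    #include <string>

    static const std::string kMessage =
        "From: app@example.com\r\n"
        "To: customer@example.org\r\n"
        "Subject: Order confirmation\r\n"
        "\r\n"
        "Thanks for your order.\r\n";

    // libcurl pulls the message body in chunks through this callback.
    static size_t read_body(char *buffer, size_t size, size_t nitems, void *userp)
    {
        size_t *offset = static_cast<size_t *>(userp);
        size_t room = size * nitems;
        size_t left = kMessage.size() - *offset;
        size_t copy = left < room ? left : room;
        std::memcpy(buffer, kMessage.data() + *offset, copy);
        *offset += copy;
        return copy;
    }

    int main()
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        size_t offset = 0;
        struct curl_slist *rcpt = curl_slist_append(nullptr, "<customer@example.org>");

        curl_easy_setopt(curl, CURLOPT_URL, "smtp://relay.example.net:587");  // placeholder relay host
        curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);        // require STARTTLS
        curl_easy_setopt(curl, CURLOPT_USERNAME, "api-key-id");               // placeholder credentials
        curl_easy_setopt(curl, CURLOPT_PASSWORD, "api-key-secret");
        curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "<app@example.com>");
        curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, rcpt);
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_body);
        curl_easy_setopt(curl, CURLOPT_READDATA, &offset);
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);

        CURLcode res = curl_easy_perform(curl);

        curl_slist_free_all(rcpt);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res == CURLE_OK ? 0 : 1;
    }

The application-side change is deliberately small; the delivery reputation, ISP relationships, and channel fan-out live behind the relay.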
At the same time it unveiled the Global Delivery Network, Message Bus launched a free reporting service called Discover that informs customers of email senders who may be abusing their domain name for illicit or unauthorized purposes. And late in November the company announced an enhancement to its service with the deployment of Opscode's Hosted Chef to automate configuration, environment and application management across the multiple cloud infrastructures powering the company's service.

Message Bus lists American Greetings, MyFitnessPal, and Telly among its early users.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn at http://linkd.in/T6trhH.)


 

Thursday, November 29, 2012

New strategies now needed to simplify data backup and protection in complex enterprise IT environments

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Quest Software.

The latest BriefingsDirect IT trends discussion targets enterprise backup, why it’s broken, and how to fix it.

Nowadays the backup of enterprise information and associated data protection are fragmented, complex, and inefficient. But new approaches are helping to simplify the data-protection process, keep costs in check, and improve recovery speed and confidence.

Joining us to share insights on how data protection became such a mess -- and how new techniques are being adopted to gain comprehensive and standard control over the data lifecycle -- are John Maxwell, Vice President of Product Management for Data Protection at Quest Software, now part of Dell, and George Crump, Founder and Lead Analyst at Storage Switzerland, an analyst firm focused on the storage market. The chat is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.  [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why has something seemingly as straightforward as backup become so fragmented and disorganized?

Maxwell: Dana, I think it’s a perfect storm, to use an overused cliché. If you look back 20 years ago, we had heterogeneous environments, but they were much simpler. There were NetWare and UNIX, and there was this new thing called Windows. Virtualization didn’t even really exist. We backed up data to tape, and a lot of data was in terabytes, not petabytes.

Flash forward to 2012, and there’s more heterogeneity than ever. You have stalwart databases like Microsoft SQL Server and Oracle, but then you have new apps being built on MySQL. You now have virtualization, and, in fact, we're at the point this year where we're surpassing the 50 percent mark on the number of servers worldwide that are virtualized.

Now we're even starting to see people running multiple hypervisors, so it’s not even just one virtualization platform anymore, either. So the environment has gotten bigger, much bigger than we ever thought it could or would. We have numerous customers today that have data measured in petabytes, and we have a lot more applications to deal with.

And last, but not least, we now have more data that’s deemed mission critical, and by mission critical, I mean data that has to be recovered in less than an hour. Surveys 10 years ago showed that in a typical IT environment, 10 percent of the data was mission critical. Today, surveys show that it’s 50 percent and more.

Crump: I would dovetail into what he just mentioned about mission criticality. There are definitely more platforms, and that’s a challenge, but the expectation of the user is just higher. The term I use for it is IT is getting "Facebooked."

High expectations

I've had many IT guys say to me, "One of the common responses I get from my users is, 'My Facebook account is never down.'" So there is this really high expectation on availability, returning data, and things of that nature that probably isn’t really fair, but it’s reality.

One of the reasons that more data is getting classified as mission critical is just that the expectation that everything will be around forever is much higher.

The other thing that we forget sometimes is that the backup process, especially a network backup, probably unlike any other, stresses every single component in the infrastructure. You're pulling data off of a local storage device on a server, it’s going through that server CPU and memory, it’s going down a network card, down a network cable, to a switch, to another card, into some sort of storage device, be it disk or tape.

So there are 15 things that happen in a backup and all 15 things have to go flawlessly. If one thing is broken, the backup fails, and, of course, it’s the IT guy’s fault. It’s just a complex environment, and I don’t know of another process that pushes on all aspects of the environment in one fell swoop like backup does.

Gardner: So the stakes are higher, the expectations are higher, the scale and volume and heterogeneity are all increased. What does this mean, John, for those that are tasked with managing this, or trying to get a handle on it as a process, rather than a technology-by-technology approach?

Maxwell: There are two issues here. One, you expect today's storage administrator, or sysadmin, to be a database administrator (DBA), a VMware administrator, a UNIX sysadmin, and a Windows admin. That’s a lot of responsibility, but that’s the fact.

A lot of people think that they are going to have as deep a level of knowledge on how to recover a Windows server as they would an Oracle database. That’s just not the case, and it's the same thing from a product perspective, from a technology perspective.

Is there really such thing as a backup product, the Swiss Army knife, that does the best of everything? Probably not, because being the best of everything means different things to different accounts. It means one thing for the small to medium-size business (SMB), and it could mean something altogether different for the enterprise.

We've now gotten into a situation where we have the typical IT environment using multiple backup products that, in most cases, have nothing in common. They have a lot of hands in the pot trying to manage data protection and restore data, and it has become a tangled mess.

Gardner: Before we dive a little bit deeper into some of these major areas, I'd like to just visit another issue that’s very top of mind for many organizations, and that’s security, compliance, and business continuity types of issues, risk mitigation issues. George Crump, how important is that to consider, when you look at taking more of a comprehensive or a holistic view of this backup and data-protection issue?

Disclosure laws

Crump: It's a really critical issue, and there are two ramifications. Probably the one that strikes fear in the heart of every CEO on the planet is all the disclosure laws that exist now that say that, when you lose a customer’s data, you have to let him know. Unfortunately, probably the only effective way to do that is to let everybody know.

I'm sure everybody listening to this podcast has gotten more than one letter already this year saying their Social Security number has been exposed, things like that. I can think of three or four I've already gotten this year.

So there is the downside of legally having to admit you made a mistake, and then there is the legal requirements of retaining information in case of a lawsuit. The traditional thing was that if I got a discovery motion filed against me, I needed to be able to pull this information back, and that was one motivator. But the bigger motivator is having to disclose that we did lose data.

And there's a new one coming in. We're hearing about big data, analytics, and things like that. All of that is based on being able to access old information in some form, pull it back from something, and be able to analyze it.

That is leading many, many organizations to not delete anything. If you don't delete anything, how do you store it? A disk-only type of solution forever, as an example, is a pretty expensive solution. I know disk has gotten a lot cheaper, but forever, that’s a really long time to keep the lights on, so to speak.

Gardner: Let's look at this a bit more from the problem-solution perspective. We have multiple platforms, we have operating systems, hypervisors, application types, even appliances. What's the solution?

Maxwell: The problem is we need to step back, take inventory of what we've got, and choose the right solution to solve the problem at hand, whether you're an SMB or an enterprise.

But the biggest thing we have to address is, with the amount and complexity of the data, how can we make sysadmins, storage administrators, and DBAs productive, and how can we get them all on the same page? Why do each one of these roles in IT have to use different products?

George and I were talking earlier. One of the things that he brought up was that in a lot of companies, data is getting backed up over and over by the DBA, the VMware administrator, and the storage administrator, which is really inefficient. We have to look at a holistic approach, and that may not be one-size-fits-all. It may be choosing the right solutions, yet providing a centralized means for administration, reporting, monitoring, etc.

Gardner: Is there anything different and specific about backup that makes this even harder to move from that point solution, best-of-breed mentality, into more of a comprehensive process standardization approach?

Demands and requirements

Crump: It really ties into what John said. Every line of business is going to have its own demands and requirements. To expect not even a backup administrator, but an Oracle administrator that’s managing an Oracle database for a line of business, to understand the nuances of that business and how they want to keep things is a lot to ask.

When backup is broken, the default survival mechanism is to throw everything out, buy the latest enterprise solution, put the stake in the ground, and force everybody to centralize on that one item. That works to a degree, but in every project we've been involved with, there are always three or four exceptions. That means it really didn’t work. You didn't really centralize.

Then there are covert operations of backups happening, where people are backing up data and not telling anybody, because they still don't trust the enterprise application. Eventually, something new comes out. The most immediate example is virtualization, which spawned the birth of several different virtualized specific applications. So bringing all that back in again becomes very difficult.

I agree with John. What you need to do is give the users the tools they want. Users are too sophisticated now for you to say, "This is where we are going to back it up and you've got to live with it." They're just not going to put up with that anymore. It won't work.

So give them the tools that they want. Centralize the process, but not the actual software. I think that's really the way to go.

Gardner: So we recognize that one size fits all probably isn’t going to apply here. We're going to have multiple point solutions. That means integration at some level or multiple levels. That brings us to our next major topic. How do we integrate well without compounding the complexity and the problems set? John?

Maxwell: We've been working on this now for almost two years here at Quest, and now at Dell, and we are launching in November something called NetVault XA. “XA” stands for Extended Architecture. We have a portfolio of very rich products that spans SMBs and the enterprise, with a focus on virtual backup, heterogeneous backup, instantaneous snapshots, and deep application recovery, and we’re keenly interested in leveraging those technologies for the DBAs and sysadmins in ways that make their lives easier and make sure they are more productive.

NetVault XA solves some really big issues. First of all, it unifies the user experience across products, and by user, I mean the sysadmin, the DBA, and the storage administrator, across products. The initial release of NetVault XA will support both our vRanger and NetVault Backup, as well as our NetVault SmartDisk product, and next year, we'll be adding even more of our products under NetVault XA as well.

So now we've provided a common means of administration. We have one UI. You don’t have to learn something different. Everyone can work on the same product, yet based on your login ID, you will have access to different things, whether it's data or capabilities, such as restoring an Oracle or SQL Server database, or restoring a virtual machine (VM).

That's a common UI. A lot of vendors right now have a lot of solutions, but they look like they're from three, four, or five different companies. We want to provide a singular user experience, but that's just really the icing on the cake with NetVault XA.

If we go down a little deeper into NetVault XA, once it's installed alongside vRanger, NetVault, or both, it's going to self-identify that vRanger or NetVault environment, and it's going to allow you to manage it the way that you have already set it up.

New approach

We're really delivering a new approach here, one we think is going to be unique in the industry. That's the ability to logically group data and applications within lines of business.

You gave an example earlier of Oracle. Oracle is not an application. Oracle is a platform for applications, and sometimes applications span databases, file systems, and multiple servers. You need to be looking at that from a holistic level, meaning what makes up application A, what makes up application B, C, D, etc.?

Then, what are the service levels for those applications? How mission critical are they? Are they in that 50 percent of data that we've seen from surveys, or are they data that could be restored from a week ago and it wouldn't matter? But then, again, it's having one tool that everyone can use. So you now have a whole different user experience and you're taking a whole different approach to data protection.
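
As a purely illustrative aside (not Quest's actual data model, and not part of the discussion), the kind of line-of-business grouping being described might be sketched as a small data structure like this:

    #include <string>
    #include <vector>

    // Hypothetical sketch of a line-of-business "application" that spans
    // several protected components, each carrying its own service level.
    struct ProtectedComponent {
        std::string name;        // e.g. "orders-db" (Oracle) or "orders-vm" (VMware)
        std::string type;        // "database", "vm", "filesystem", ...
        int rpoMinutes;          // how much data loss is tolerable
        int rtoMinutes;          // how quickly it must be recovered
    };

    struct BusinessApplication {
        std::string lineOfBusiness;             // e.g. "Claims Processing"
        std::vector<ProtectedComponent> parts;  // databases, VMs, file systems
        bool missionCritical;                   // must be recoverable in under an hour
    };

Grouping at this level is what lets service levels be stated per business application rather than per backup job.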

Gardner: There really seems to be a drilling down into these technologies and surfacing information to such a degree that it strikes me as similar to what IT service management (ITSM) did for managing IT systems at a higher level. We're now bringing that to a discrete portion backup and recovery. Does that sound about right, George, or did I overstate it?

Crump: No, that's dead-on. The benefits of that type of architecture are going to be substantial. Imagine if you are the vRanger programmer, when all this started. Instead of having to write half of the backend, you could just plug into a framework that already existed and then focus most of your attention on the particular application or environment that you are going to protect.

You can be releasing the equivalent of vRanger 6 on vRanger 1, because you wouldn’t have to go write this backend that already existed. Also, if you think about it, you end up with a much more reliable software product, because now you're building on a library class that will have been well tested and proven.

Say you want to implement deduplication in a new version of the product or a new product. Instead of having to rewrite your own deduplication engine, just leverage the engine that's already there.

One common means

Maxwell: By having one common means -- whether you're a DBA, a sysadmin, a VMware administrator, or a storage administrator -- you are all on the same page. You can have people all buying into one way of doing things, so we don't have this data being backed up two or three times.

But the other thing that you get, and this is a big issue now, is protecting multiple sites. When we talk about multiple sites, people sometimes say, "You mean multiple data centers. What about all those remote office branch offices?" That right now is a big issue that we see customers running into.

The beauty of NetVault XA is I can now have various solutions implemented, whether it's vRanger running remotely or NetVault in a branch office, and I can be managing it. I can manage all aspects of it to make sure that those backups are running properly, or make sure replication is working properly. It could be halfway around the country or halfway around the world, and this way we have consistency.

Speaking of reporting, as you said earlier, what about a dashboard for management? One of our early users of NetVault XA is a large multinational company with 18 data centers and 250,000 servers. They have had to dedicate people to write service-level reports for their backups. Now, with NetVault XA, they can literally give their IT management, meaning their CIO and their CTOs, login IDs to NetVault XA, and they can see a dashboard that’s been color coded.

It can say, "Well, everything is green, so everything is protected," whether it's the Linux servers, Oracle databases, Exchange email, whatever the case. So by being able to reduce that level of complexity into a single pane of glass -- I know it's a cliché, but it really is -- it's really very powerful for large organizations and small.

Even if you have two or three locations and you're only 500 employees, wouldn’t it be nice to have the ability to look at your backups, your replicas, and your snapshots, whether they're in the data center or in branch offices, and whether you're a sysadmin, DBA, or storage administrator, to be using one common interface and one common set of rules and basically all get on the same page?

Dispersed operations

So it's having a means to take an inventory and ensure that the servers are being maintained, that everything is being protected, because next to your employees, your data is the most important asset that you have.

Data is everywhere now. It’s in mobile devices. It certainly could be in cloud-based apps. That's one of the things that we didn’t talk about. At Quest we use seven software-as-a-service (SaaS)-based applications, meaning they're big parts of our business, whether it's Salesforce.com or our helpdesk systems, or even Office 365. This is mission-critical corporate data that doesn’t run in our own data center. How am I protecting that? Am I even cognizant of it?

The cloud has made things even more interesting, just as virtualization has made it more interesting over the past couple of years. With NetVault XA, we give you that one single pane of glass with which you can report, analyze, and manage all of your data.

Mobile devices

Gardner: Just to be clear John, this console is something you can view as a web interface, and I'm assuming therefore also through mobile devices. I'm going to guess that at some point, there will perhaps be even a more native application for some of the prominent mobile platforms.

Maxwell: It’s funny that you mentioned that. This is an HTML5-based application. So it's very new, very fresh, and very graphical. If you look at the UI, it was designed with tablets and laptops in mind. It's gotten to where you can do controls with your thumbs, assuming you're running this on a tablet.

In-house, and with early-support customers, you can log into this remotely via laptops or tablets. We even have some people using it on mobile phones, even though we're not quite there yet. I'm talking about the form factor of how the screens lay out, but we will definitely be going that way. So a sysadmin or storage administrator can have at their fingertips the status of what's going on in the data-protection environment.

What's nice is because this is a thin client, a web UI, you can define user IDs not only for the sysadmins and DBAs and storage administrators, but like I said earlier, IT management.

So if your boss, or your boss' boss, wants to dial in and see the health of things -- how much data you're protecting, how much data is being replicated, what data is being protected in the cloud and what is on-premises, all of that sort of stuff -- they can now have a dashboard approach to seeing it all. That's going to make everyone more productive, and it's going to give them a better sense that this data is being protected, and they can sleep at night.
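The role-based logins described here can be thought of as a simple mapping from a console role to what that login is allowed to do, with IT management getting a read-only dashboard view. The roles and permission names below are hypothetical, invented only to illustrate the idea; they are not NetVault XA's actual model.

# Hypothetical sketch: map console roles to permissions, so an executive
# login sees a read-only dashboard while administrators can change things.
ROLE_PERMISSIONS = {
    "it_management": {"view_dashboard"},
    "sysadmin":      {"view_dashboard", "run_backup", "edit_policy"},
    "dba":           {"view_dashboard", "run_backup"},
    "storage_admin": {"view_dashboard", "edit_policy"},
}

def can(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("it_management", "view_dashboard"))  # True  -- the CIO can look
print(can("it_management", "edit_policy"))     # False -- but not change policies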

Gardner: Is there anything here, going forward, that will make having a process approach to the data lifecycle, and to backup and recovery, even more important?

Maxwell: Dana, you hit on something that's really near and dear to my heart, which is data deduplication. We have a very broad strategy. We offer our own software-based dedupe, we support every major hardware-based dedupe appliance out there, and we're now adding support for Dell's DR Series DR4000 dedupe appliances. But we're still very much committed to tape, and we're building initiatives around storing data in the cloud for backup, replication, failover, and so forth.

One of the things that we built into NetVault XA that's separate from the policy management and online monitoring is that we now have historical data. This is going to give you the ability to do some capacity management and capacity planning and see what the utilization is.

How much storage are your backups taking? What's the most optimum number of generations? Where are you keeping that data? Is some data being kept too long? Is some data not being kept long enough?
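Those capacity questions become simple arithmetic once historical job data is on hand. As a rough, hypothetical sketch -- every figure, field name, and the latest-size-times-generations estimate below is invented for illustration, not drawn from NetVault XA:

from datetime import date

# Invented sample history: (backup_date, dataset, size_gb, retained_generations)
history = [
    (date(2012, 11, 1), "Oracle databases", 800, 14),
    (date(2012, 11, 8), "Oracle databases", 820, 14),
    (date(2012, 11, 1), "Exchange email",   300, 30),
    (date(2012, 11, 8), "Exchange email",   310, 30),
]

def total_protected_gb(history):
    """Rough capacity estimate: latest backup size times retained generations."""
    latest = {}
    for when, dataset, size_gb, generations in history:
        if dataset not in latest or when > latest[dataset][0]:
            latest[dataset] = (when, size_gb, generations)
    return sum(size * gens for _when, size, gens in latest.values())

print(f"Approximate protected capacity: {total_protected_gb(history):,} GB")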

By offering a broad strategy that says we support a plethora of backup targets -- whether it's tape, special-purpose backup appliances, software-based dedupe, or even the cloud -- we're giving customers flexibility, because each of them has unique and differing needs, based on service levels or budgets. We want to give them that flexibility, because, going back to our original discussion, one size doesn't fit all.

Crump: Just to tie in with what John said, we need flexibility that doesn't add complexity. Almost everything we've done in the environment up to now has added flexibility, but for every ounce of flexibility, it feels like we have added two ounces of complexity, and that's something we just can't afford to deal with. So that's really the key thing.

Looking forward, at least on the horizon, I don't see a big shift -- something like virtualization -- that we need to be overly concerned with. What I do see is the virtual environment becoming more and more challenging as we stack more and more VMs on it. The amount of I/O and the amount of data-protection processing that surrounds every host is going to continue to increase. So the time is now to take the bull by the horns and institute a process that will scale with the business long term.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Quest Software.

You may also be interested in: 

Wednesday, November 28, 2012

HP BSM software newly harnesses big-data analysis to better predict, prevent, and respond to IT issues

HP this week announced a new version of its HP Business Service Management (BSM) software to endow IT organizations with big-data analysis capabilities across mobile, hybrid, and cloud IT environments. The goal: to significantly improve the performance and availability of software services.

As organizations have adopted virtualization and cloud technologies, the complexity of effectively monitoring trouble across these systems has skyrocketed. And, with the rise of shared services, IT no longer knows or controls all the technologies supporting the business.

So HP has broadened its BSM solutions to deliver better end-to-end visibility into IT applications and services by exploiting powerful real-time and historical analytics. With enhanced BSM, IT can anticipate performance and trouble issues before they happen. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

“IT organizations are looking for new ways to deliver predictable service levels," said Ajei Gopal, senior vice president and general manager, Hybrid and Cloud Products, Software at HP. “The new HP Business Service Management software delivers end-to-end operational intelligence to help IT make better decisions and improve service levels in complex, dynamic IT environments.”

Operational analytics

New to HP BSM is HP Operational Analytics (OpsAnalytics), a capability that delivers ongoing intelligence about the health of IT services by automating the correlation and analysis of consolidated data, including reams of machine data, logs, events, topology, and performance information.

OpsAnalytics is enabled through the integration of HP ArcSight Logger, a universal log management solution, with correlation capabilities of HP Operations Manager i (OMi), and the predictive analytics of HP Service Health Analyzer (SHA). This combination delivers deep visibility and insight into nearly any performance or availability issue, so, says HP, IT operators can:
  • Remediate known problems before they occur with predictive analytics that forecast problems and prioritize issues based on business impact
  • Proactively solve unknown issues by collecting, storing, and analyzing IT operational data to automatically correlate service abnormalities with the problem source
  • Resolve incidents faster with knowledge based on historical analysis of similar prior events, along with search capabilities across logs and events.
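HP has not published the internals of OpsAnalytics, but the underlying idea of correlating a spike in error logs with a degraded service-health metric can be sketched in a few lines. The thresholds and per-minute samples below are invented purely to illustrate the concept:

# Hypothetical sketch: flag time windows where an error-log spike coincides
# with slow response times -- a simple form of log/metric correlation.
# Invented per-minute samples: (minute, error_log_count, avg_response_ms)
samples = [
    (1, 2, 120), (2, 3, 130), (3, 40, 900),
    (4, 35, 850), (5, 4, 125),
]

ERROR_SPIKE = 20   # invented threshold: error-log entries per minute
SLOW_MS = 500      # invented threshold: average response time in milliseconds

def correlated_incidents(samples):
    """Return the minutes where both the log volume and the metric look abnormal."""
    return [minute for minute, errors, resp_ms in samples
            if errors >= ERROR_SPIKE and resp_ms >= SLOW_MS]

print("Suspect minutes:", correlated_incidents(samples))  # -> [3, 4]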
HP BSM further helps clients maximize IT investments with end-to-end visibility across heterogeneous environments, enabling clients to:
  • Ensure service availability with a 360-degree view of IT performance, gathered by aggregating data from disparate sources into a single dashboard using out-of-the-box connectors to a range of management frameworks, including IBM Tivoli Enterprise Console and IBM Tivoli Monitoring and Microsoft System Center
  • Resolve and improve performance of applications running in OpenStack and Python cloud environments with diagnostics that pinpoint performance bottlenecks
  • Improve availability of web and mobile applications through greater insight into client side performance issues.
HP also enables virtualization administrators to diagnose and troubleshoot performance bottlenecks in highly virtualized environments with HP Virtualization Performance Viewer (vPV). According to HP, vPV helps reduce operational resources by up to 70 percent and decrease time to problem resolution by up to 50 percent, and it is available as a free download.

The free versions of HP Virtualization Performance Viewer (vPV) and HP ArcSight Logger are available to download from www.hp.com/go/vpv and www.hp.com/go/opsanalytics respectively.

You may also be interested in: