Tuesday, January 29, 2013

AT&T cloud services built on VMware vCloud Datacenter meet evolving business demands for advanced IaaS

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next BriefingsDirect IT leadership discussion focuses on how global telecommunications giant AT&T has created advanced cloud services for its business customers. We'll see how AT&T has developed the ability to provide virtual private clouds and other computing capabilities as integrated services at scale.

To learn more about implementing cloud technology to deliver and commercialize an adaptive and reliable cloud services ecosystem, we sat down with Chris Costello, Assistant Vice President of AT&T Cloud Services. The interview was conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why are business cloud services such an important initiative for you?

Costello: AT&T has been in the hosting business for over 15 years, and so it was only a natural extension for us to get into the cloud services business to evolve with customers' changing business demands and technology needs.

We have cloud services in several areas. The first is our AT&T Synaptic Compute as a Service. This is a hybrid cloud that allows VMware clients to extend their private clouds into AT&T's network-based cloud using a virtual private network (VPN). And it melds the security and performance of VPN with the economics and flexibility of a public cloud. So the service is optimized for VMware's more than 350,000 clients.

If you look at customers who have internal clouds today or private data centers, they like the control, the security, and the leverage that they have, but they really want the best of both worlds. There are certain workloads where they want to burst into a service provider’s cloud.

We give them that flexibility, agility, and control, where they can simply point and click, using free downloadable tools from VMware, to instantly turn up workloads into AT&T's cloud.

Another capability that we have in this space is AT&T Platform as a Service. This is targeted primarily to independent software vendors (ISVs), IT leaders, and line-of-business managers. It allows customers to choose from 50 pre-built applications, instantly mobilize those applications, and run them in AT&T's cloud, all without having to write a single line of code.

So we're really starting to get into more of the informal buyers, those line-of-business managers, and IT managers who don't have the budget to build it all themselves, or don't have the budget to buy expensive software licenses for certain application environments.

Examples of some of the applications that we support with our platform as a service (PaaS) are things like salesforce automation, quote and proposal tools, and budget management tools.

Storage space

The third key category of AT&T's Cloud Services is in the storage space. We have our AT&T Synaptic Storage as a Service, and this gives customers control over storage, distribution, and retrieval of their data, on the go, using any web-enabled device. In a little bit, I can get into some detail on use cases of how customers are using our cloud services.

This is a very important initiative for AT&T. We're seeing demand from customers of all shapes and sizes. We have a sizable business and effort supporting our small- to medium-sized business (SMB) customers, and we have capabilities that we have tailor-developed just to reach those markets.

As an example, in SMB, it's all about the bundle. It's all about simplicity. It's all about on demand. And it's all about pay per use and having a service provider they can trust.

In the enterprise space, you really start getting into detailed discussions around security. You also start getting into discussions with many customers who already have private networking solutions from AT&T that they trust. When you talk with clients about the fact that they can run a workload and turn up a server in the cloud, behind their firewall, it really resonates with the CIOs we're speaking with in the enterprise space.

Also in enterprises, it's about having a globally consistent experience. So as these customers are reaching new markets, it's all about not having to stand up an additional data center, compute instance, or what have you, and having a very consistent experience, no matter where they do business, anywhere in the world.

New era for women in tech

Gardner: The fact is that a significant majority of CIOs and IT executives are men, and that’s been the case for quite some time. But I'm curious, does cloud computing and the accompanying shift towards IT becoming more of a services brokering role change that? Do you think that with the consensus building among businesses and partner groups being more important in that brokering role, this might bring in a new era for women in tech?

Costello: I think it is a new era for women in tech. Specifically to my experience in working at AT&T in technology, this company has really provided me with an opportunity to grow both personally and professionally.

I currently lead our Cloud Office at AT&T and, prior to that, ran AT&T’s global managed hosting business across our 38 data centers. I was also lucky enough to be chosen as one of the top women in wireline services.

What drives me as a woman in technology is that I enjoy the challenge of creating offers that meet customer needs, whether in the cloud space or in areas like eCommerce, high-performance computing environments, or disaster recovery (DR) solutions.

I love spending time with customers. That's my favorite thing to do. I also like to interact with the many partners and vendors I work with to stay current on trends and technologies. The key to success as a woman working in technology is being able to build offers that solve customers' business problems, number one.

Number two is being able to then articulate the value of a lot of the complexity around some of these solutions, and package the value in a way that’s very simple for customers to understand.

Some of the challenge and also opportunity of the future is that, as technology continues to evolve, it’s about reducing complexity for customers and making the service experience seamless. The trend is to deliver more and more finished services, versus complex infrastructure solutions.

I've had the opportunity to interact with many women in leadership, whether in my peer group, managers who work as part of my team, or mentors I have within AT&T who are senior leaders in the business.

I also mentor three women at AT&T, in technology, sales, and operations roles. So I'm starting to see this trend continue to grow.

Gardner: You have a lot of customers who are already using your business network services. I imagine there are probably some good cost-efficiencies in moving them to cloud services as well.

Costello: Absolutely. We've embedded cloud capabilities into the AT&T managed network. It enables us to deliver a mobile cloud as well. That helps customers to transform their businesses. We're delivering cloud services in the same manner as voice and data services, intelligently routed across our highly secure, reliable network.

AT&T's cloud is not sitting on top of or attached to our network, but it's fully integrated to provide customers a seamless, highly secure, low-latency, and high-performing experience.

Gardner: Why did you choose VMware and vCloud Datacenter Services as a core of the AT&T Synaptic Compute as a Service?

Multiple uses

Costello: AT&T uses VMware in several of our hosting application and cloud solutions today. In the case of AT&T Synaptic Compute as a Service, we use that in several ways, both to serve customers in public cloud and hybrid, as well as private cloud solutions.

We've also been using VMware technology for a number of years in AT&T's Synaptic Hosting offer, which is our enterprise-grade utility computing service. We've also been serving customers with server virtualization solutions available in AT&T data centers around the world, which can also be extended into customer or third-party locations.

Just to drill down on some of the key differentiators of AT&T Synaptic Compute as a Service, it’s two-fold.

One is that we integrate with AT&T private networking solutions. One of the benefits customers enjoy as a result is orchestration of resources, where we'll take compute, storage, and networking resources and provide the exact amount at exactly the right time to customers on demand.

Our solutions offer enterprise-grade security. The fact that we've integrated AT&T Synaptic Compute as a Service with our private networking solutions allows customers to extend their cloud into our network using VPN.
An engineering firm can now perform complex mathematical computations and extend from their private cloud into AT&T’s hybrid solution instantaneously, using their native VMware toolset.

Let me touch upon VMware vCloud Datacenter Services for a minute. We think that’s another key differentiator for us, in that we can allow clients to seamlessly move workloads to our cloud using native VMware toolsets. Essentially, we're taking technical complexity and interoperability challenges off the table.

With the vCloud Datacenter program that we are part of with VMware, we're letting customers copy and paste workloads and see all of their virtual machines, whether in their own private cloud environment or in a hybrid solution provided by AT&T. Providing that seamless access to view all of their virtual machines and manage them through a single interface is key to reducing technical complexity and speeding time to market.

We've been doing business with VMware for a number of years. We also have a utility-computing platform called AT&T Synaptic Hosting. We learned early on, in working with customers’ managed utility computing environments, that VMware was the virtualization tool of choice for many of our enterprise customers.

As technologies evolved over time and cloud technologies have become more prevalent, it was absolutely paramount for us to pick a virtualization partner that was going to provide the global scale that we needed to serve our enterprise customers, and to be able to handle the large amount of volume that we receive, given the fact that we have been in the hosting business for over 15 years.

As a natural extension of our Synaptic Hosting relationship with VMware for many years, it only made sense that we joined the VMware vCloud Datacenter program. VMware is baked into our Synaptic Compute as a Service capability. And it really lets customers have a simplified hybrid cloud experience. In five simple steps, customers can move workloads from their private environment into AT&T's cloud environment.

Imagine that you're the IT manager coming in to start your workday. All of a sudden, you hit 85 percent utilization in your environment, but you want to very easily access additional resources from AT&T. You can use the same console that you use to perform your daily job for the data center you run in-house.

In five clicks, you're viewing your in-house private-cloud resources that are VMware based and your AT&T virtual machines (VMs) running in AT&T's cloud, our Synaptic Compute as a Service capability. That all happens in minutes' time.
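
That threshold-driven flow -- watch utilization, then extend into the provider's cloud when a limit is crossed -- can be sketched in a few lines. The client class, method names, and threshold below are hypothetical stand-ins for illustration, not AT&T's or VMware's actual APIs.

```python
# Illustrative sketch of a cloud-bursting policy. The HybridCloudClient class,
# its method names, and the threshold are hypothetical stand-ins -- not AT&T
# or VMware APIs.

BURST_THRESHOLD = 0.85  # burst once the private pool passes 85% utilization


class HybridCloudClient:
    """Hypothetical wrapper around a private VMware pool and a provider cloud."""

    def private_utilization(self) -> float:
        # A real implementation would query the on-premises environment.
        raise NotImplementedError

    def clone_to_provider(self, vm_name: str) -> str:
        # A real implementation would copy the workload over a VPN into the
        # provider's cloud and return the new virtual machine's identifier.
        raise NotImplementedError


def maybe_burst(client: HybridCloudClient, vm_name: str):
    """Extend into the provider cloud only when the private pool runs hot."""
    if client.private_utilization() >= BURST_THRESHOLD:
        return client.clone_to_provider(vm_name)
    return None
```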

Gardner: I should also think that the concepts around the software-defined datacenter and software-defined networking play a part in this. Is that something you are focused on?

Costello: Software-defined datacenter and software-defined networks are essentially what we're talking about here with some uniqueness that AT&T Labs has built within our networking solutions. We essentially take our edge, our edge routers, and the benefits that are associated with AT&T networking solutions around redundancy, quality of service, etc., and extend that into cloud solutions, so customers can extend their cloud into our network using VPN solutions.

Added efficiency

Previously, many customers would have to buy a router and try to pull together a solution on their own, which can be costly and time consuming. There's a whole lot of efficiency that comes with having a service provider able to manage your compute, storage, and networking capabilities end to end.

Global scale was also very critical to the customers who we've been talking to. The fact that AT&T has localized and distributed resources through a combination of our 38 data centers around the world, as well as central offices, makes it very attractive to do business with AT&T as a service provider.

Gardner: We've certainly seen a lot of interest in hybrid cloud. Is that one of the more popular use cases?

Costello: I speak with a lot of customers who are looking to expand virtually. They have data-center, systems, and application investments at their global headquarters locations, but they don't want to have to stand up another data center or ship staff out to other locations. So certainly one use case that's very popular with customers is, "I can expand my virtual data-center environment and use AT&T as a service provider to help me do that."

Another use case that's very popular with our customers is disaster recovery. We see a lot of customers looking for a more efficient way to maintain business continuity, fail over in the event of a disaster, and test their plans more frequently than they're doing today.

For many of the solutions in place today, clients are saying they're expensive or just not meeting the service-level agreements (SLAs) they owe their business units. For one client, we recently placed them in two of AT&T's geographically diverse data centers, wrapped the environment with AT&T's private-networking capability, and then layered in our AT&T Synaptic Compute as a Service and Storage as a Service.

The customer ended up with a better SLA and a very powerful return on investment (ROI) as well, because they're only paying for the cloud resources when the meter is running. They now have a stable environment where they can test their plans as often as they'd like, paying only a very small storage fee unless they actually need to invoke the plan in a disaster. So DR plans are very popular.
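
To see why that pay-per-use model can produce such a powerful ROI, here is a back-of-the-envelope comparison. Every figure in it is a made-up placeholder for illustration, not AT&T pricing.

```python
# Back-of-the-envelope DR cost comparison. All figures are hypothetical
# placeholders, not AT&T pricing.

HOURS_PER_MONTH = 730

# Traditional approach: a standby environment that runs (and bills) all month.
standby_servers = 20
rate_per_server_hour = 0.50    # dollars, hypothetical
always_on = standby_servers * rate_per_server_hour * HOURS_PER_MONTH

# Pay-per-use approach: a small storage fee plus compute only during DR tests
# or an actual failover.
monthly_storage_fee = 300      # dollars, hypothetical
test_hours = 8                 # hours of DR testing this month, hypothetical
pay_per_use = monthly_storage_fee + standby_servers * rate_per_server_hour * test_hours

print(f"Always-on standby: ${always_on:,.0f}/month")
print(f"Pay-per-use DR:    ${pay_per_use:,.0f}/month")
```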

Another use case that’s very popular among our clients is short-term compute. We work with a lot of customers who have massive mathematical calculations and they do a lot of number crunching.

Finally, in the compute space, we're seeing a lot of customers start to hang virtual desktop solutions off of their compute environment. In the past, when I would ask clients about virtual desktop infrastructure (VDI), they'd say, "We're looking at it, but we're not sure. It hasn’t made the budget list." All of a sudden, it’s becoming one of the most highly requested use cases from customers, and AT&T has solutions to cover all those needs.
 
Gardner: Do you think that this will extend to some of the big data and analytics crunching that we've heard about?

Costello: I don’t think anyone is in a better position than AT&T to be able to help customers to manage their massive amounts of data, given the fact that a lot of this data has to reside on very strong networking solutions. The fact that we have 38 data centers around the world, a global reach from a networking perspective, and all the foundational cloud capabilities makes a whole lot of sense.

Speaking about this type of "bursty" use case, we host some of the largest brand-name retailers in the world. A lot of these retailers are preparing for the holidays, and their servers go underutilized much of the year. So how attractive is it to look to AT&T, as a service provider, for robust SLAs and a platform they only have to pay for when they need it, versus servers sitting very underutilized for much of the year?

We also host many online gaming customers. When you think about the gamers that are out there, there is a big land rush when the buzz occurs right before the launch of a new game. We work very proactively with those gaming customers to help them size their networking needs well in advance of a launch. Also we'll monitor it in real time to ensure that those gamers have a very positive experience when that launch does occur.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Convercent's cloud app aims to help employees implement, measure, and rate corporate values and culture

Convercent, a new company that launched today, aims to fill a void in the governance, risk, and compliance (GRC) market with approachable tools that help employees implement corporate values while offering ways to measure and rate such contributions.

GRC has traditionally provided companies with tools to help customers meet government and industry regulations, enforce corporate policies, and better deal with risk. Yet the areas of corporate culture and values -- which are becoming increasingly important in today's business climate -- were rarely addressed.

“Our platform allows companies to take the fuzzy-wuzzy of ethics -- to take the sign off the wall -- and turn it into structured data to measure employee actions against the organization’s stated values and culture,” said Patrick Quinlan, Convercent's CEO.

Launching with 52 employees and $10.2 million in venture funding, the start-up hopes to capitalize on the recent trend of companies using their core ethics, culture, and values to drive their business models. Convercent’s founders had earlier invested in a compliance software maker called Business Controls Inc. and are converting the 300 customers from that venture to Convercent.

Whole Foods is an example of such a company -- the grocery store’s brand makes a promise of quality and safety to its customers. If that promise is broken, the brand is damaged and the results could be far more devastating than a regulatory fine, said Quinlan.

Plenty of companies are making corporate values a top priority today, according to Quinlan, who offers the example of Google’s "Don't be evil" credo. Ensuring that employees walk the walk of the company’s ethics is becoming as important as making sure they abide by more traditional corporate or regulatory guidelines.

“Companies like ours spend 85 percent of every dollar on people,” said Quinlan. “If we don’t drive an effective culture, how can we drive performance?”

Cloud application

Convercent integrates corporate values and more traditional GRC activities into a cloud application that features mobile access and a clean user interface. For example, employees can use the application to read a definition of what their company considers community service, see examples of such activities, and log hours spent in service.

Aimed at legal, audit and compliance executives, the tool can also be used to distribute company policies, stay compliant with regulations, educate employees, and align company performance with culture, said CIO Philip Winterburn. It offers a way for companies to report and respond to incidents and craft escalations, investigations and resolutions.

Managers can receive reports on employee or department engagement and generate related scores, turning the ambiguous area of company involvement into a mathematical measurement, according to Winterburn.
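
The post doesn't describe how those scores are computed, but the general idea of turning logged activity into a number can be sketched as a simple weighted rollup. The categories and weights below are invented for illustration and are not Convercent's scoring model.

```python
# Illustrative engagement score: a weighted rollup of logged activities.
# Categories and weights are invented; Convercent's actual scoring model
# is not described in this post.

WEIGHTS = {
    "community_service_hours": 2.0,
    "policy_attestations": 1.0,
    "training_modules_completed": 1.5,
}


def engagement_score(activity: dict) -> float:
    """Collapse an employee's or department's logged activity into one number."""
    return sum(WEIGHTS.get(category, 0.0) * count
               for category, count in activity.items())


print(engagement_score({"community_service_hours": 10,
                        "policy_attestations": 4,
                        "training_modules_completed": 2}))  # 27.0
```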

The product is available at launch in more than 40 languages and can also produce on-the-fly translations. Mobile access from iOS devices is included at launch, with plans for support of Android devices to follow.

Founders Quinlan, Winterburn, and Barclay Friesen hail from compliance software maker Rivet Software. The three also founded Nebbiolo Ventures, which they describe as an entrepreneurial venture. A video of the Convercent product launch can be found here.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)


Monday, January 28, 2013

Ford scours for more big data to bolster quality, improve manufacturing, streamline processes

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

Ford has exploited the strengths of big-data analytics by directing them internally to improve business results. In doing so, it scours the metrics from the company’s best processes across myriad manufacturing efforts, along with detailed outputs from in-use automobiles -- all to improve and help transform its business.

So explains Michael Cavaretta, PhD, Technical Leader of Predictive Analytics for Ford Research and Advanced Engineering in Dearborn, Michigan. Cavaretta is one of a group of experts assembled this week for The Open Group Conference in Newport Beach, California.

Cavaretta has led multiple data-analytics projects at Ford to break down silos inside the company and define Ford’s most fruitful data sets. Ford has successfully aggregated customer feedback and extracted internal data to predict how new features and technologies will best improve its cars.

As a contributor to The Open Group Conference and its focus on "Big Data -- The Transformation We Need to Embrace Today," Cavaretta explains how big data is fostering business transformation by allowing deeper insights into more types of data efficiently, and thereby improving processes, quality control, and customer satisfaction.

The interview was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What's different now in being able to get at this data and do this type of analysis from five years ago?

Cavaretta: The biggest difference has to do with the cheap availability of storage and processing power, where a few years ago people were very much concentrated on filtering down the datasets that were being stored for long-term analysis. There has been a big sea change with the idea that we should just store as much as we can and take advantage of that storage to improve business processes.

Gardner: How did we get here? What's the process behind the benefits?

Sea change in attitude

Cavaretta: The process behind the benefits has to do with a sea change in the attitude of organizations, particularly IT within large enterprises. The idea is that you don't need to spend so much time figuring out what data you want to store and worrying about the cost associated with it; you think more about data as an asset. There is value in being able to store it and to go back and extract different insights from it. This comes from really cheap storage, access to parallel-processing machines, and great software.

I like to talk to people about the possibility that big data provides and I always tell them that I have yet to have a circumstance where somebody is giving me too much data. You can pull in all this information and then answer a variety of questions, because you don't have to worry that something has been thrown out. You have everything.

You may have 100 questions, and each one of the questions uses a very small portion of the data. Those questions may use different portions of the data, a very small piece, but they're all different. If you go in thinking, "We’re going to answer the top 20 questions and we’re just going to hold data for that," that leaves so much on the table, and you don't get any value out of it.

We're a big believer in mash-ups and we really believe that there is a lot of value in being able to take even datasets that are not specifically big-data sizes yet, and then not go deep, not get more detailed information, but expand the breadth. So it's being able to augment it with other internal datasets, bridging across different business areas as well as augmenting it with external datasets.
A lot of times you can take something that is maybe a few hundred thousand records or a few million records, and then by the time you’re joining it, and appending different pieces of information onto it, you can get the big dataset sizes.
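
As a concrete sketch of that kind of mash-up, the snippet below joins a modest internal dataset with an external one using pandas; the files and column names are hypothetical, not Ford's.

```python
# Minimal mash-up sketch: widen an internal dataset by joining external
# attributes onto it. File and column names are hypothetical, not Ford's.
import pandas as pd

# Internal data: per-vehicle warranty claims keyed by plant and build week.
warranty = pd.read_csv("warranty_claims.csv")    # vin, plant, build_week, claim_cost

# External data: regional weather history keyed by plant and week.
weather = pd.read_csv("regional_weather.csv")    # plant, build_week, avg_humidity

# Appending external attributes onto each record expands the breadth of the
# dataset without needing deeper detail on any single record.
mashed = warranty.merge(weather, on=["plant", "build_week"], how="left")
print(mashed["claim_cost"].corr(mashed["avg_humidity"]))
```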

Gardner: You’re really looking primarily at internal data, while also availing yourself of what external data might be appropriate. Maybe you could describe a little bit about your organization, what you do, and why this internal focus is so important for you.

Internal consultants

Cavaretta: I'm part of a larger department that is housed over in the research and advanced-engineering area at Ford Motor Company, and we’re about 30 people. We work as internal consultants, kind of like Capgemini or Ernst & Young, but only within Ford Motor Company. We’re responsible for going out and looking for different opportunities from the business perspective to bring advanced technologies. So, we’ve been focused on the area of statistical modeling and machine learning for I’d say about 15 years or so.

And in this time, we've had a number of engagements where we've talked with different business customers, and people have said, "We'd really like to do this." Then, we'd look at the datasets that they have and say, "Wouldn't it be great if we had had this? Now we have to wait six months or a year."

These new technologies are really changing the game from that perspective. We can turn on the complete fire-hose, and then say that we don't have to worry about that anymore. Everything is coming in. We can record it all. We don't have to worry about if the data doesn’t support this analysis, because it's all there. That's really a big benefit of big-data technologies.

The real value proposition definitely is changing as things are being pushed down in the company to lower-level analysts who are really interested in looking at things from a data-driven perspective. From when I first came in to now, the biggest change has been when Alan Mulally came into the company, and really pushed the idea of data-driven decisions.

Before, we were getting a lot of interest from people who were very focused on the data they had internally. After that, they got a lot of questions from their management and from upper-level directors and vice presidents saying, "We’ve got all these data assets. We should be getting more out of them." This strategic perspective has really changed a lot of what we’ve done in the last few years.

Gardner: Are we getting to the point where this sort of Holy Grail notion of a total feedback loop across the lifecycle of a major product like an automobile is really within our grasp? Are we getting there, or is this still kind of theoretical? Can we pull it all together and make it a science?

Cavaretta: The theory is there. The question has more to do with the actual implementation and the practicality of it. We're still talking about a lot of data. Even with new advanced technologies and techniques, that's a lot of data to store, a lot of data to analyze, and a lot of data to make sure we can mash up appropriately.

While I think the potential and the theory are there, there is also work in being able to get the data from multiple sources. Everything you can get back from the vehicle is fantastic. Now, if you marry that up with internal data -- survey data, manufacturing data, quality data -- what do you want to go after first? We can't do everything at the same time.

Highest value

Our perspective has been: let's make sure we identify the highest-value, greatest-ROI areas, and then begin to take some of the major datasets we have, push them to get more detail, mash them up appropriately, and really prove out the value of the technologies.

Gardner: Clearly, there's a lot more to come in terms of where we can take this, but I suppose it's useful to have a historic perspective and context as well. I was thinking about some of the early quality gurus like Deming and some of the movement towards quality like Six Sigma. Does this fall within that same lineage? Are we talking about a continuum here over that last 50 or 60 years, or is this something different?

Cavaretta: That’s a really interesting question. From the perspective of analyzing data, using data appropriately, I think there is a really good long history, and Ford has been a big follower of Deming and Six Sigma for a number of years now.

The difference, though, is this idea that you don't have to worry so much upfront about getting the data. If you're doing this right, you have the data right there, and this has some great advantages. You don't have to wait until you've built up enough history to start looking for patterns. Then again, it also has some disadvantages, in that you've got so much data that it's easy to find things that could be spurious correlations or models that don't make any sense.

The piece that is required is good domain knowledge, in particular when you are talking about making changes in the manufacturing plant. It's very appropriate to look at things and be able to talk with people who have 20 years of experience to say, "This is what we found in the data. Does this match what your intuition is?" Then, take that extra step.

Gardner: How has the notion of the Internet of things being brought to bear on your gathering of big data and applying it to the analytics in your organization?

Cavaretta: It's a huge area, not only from the internal process perspective -- RFID tags within the manufacturing plants and out on the plant floor -- but also all of the information that's being generated by the vehicle itself.

The Ford Energi generates about 25 gigabytes of data per hour. So you can imagine selling a couple of million vehicles in the near future with that amount of data being generated. There are huge opportunities within that, and there are also some interesting opportunities having to do with opening up some of these systems for third-party developers. OpenXC is an initiative that we have ongoing at Research and Advanced Engineering.
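
Those figures allow a quick sanity check on scale; the fleet size comes from Cavaretta's comment, while the one-hour-a-day driving assumption below is an illustrative guess, not a Ford figure.

```python
# Rough scale check on fleet data volume. The one-hour-per-day driving
# assumption is an illustrative guess, not a Ford figure.
gb_per_driving_hour = 25
vehicles = 2_000_000
driving_hours_per_day = 1

daily_petabytes = gb_per_driving_hour * vehicles * driving_hours_per_day / 1_000_000
print(f"~{daily_petabytes:.0f} PB of raw data per day across the fleet")
# ~50 PB/day, which is why only a subset gets sent back for analysis.
```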

Huge number of sensors

We have a lot of data coming from the vehicle. There's a huge number of sensors and processors being added to the vehicles. There's data being generated there, as well as communication between the vehicle and your cell phone, and communication between vehicles.

There's a group in Ann Arbor, Michigan, the University of Michigan Transportation Research Institute (UMTRI), that's investigating that, as well as communication between the vehicle and, let's say, a home system. It lets the home know that you're on your way and that it's time to raise the temperature if it's winter outside, or cool it down in the summertime.

The amount of data being generated there is invaluable and could be used for a lot of benefits, both from the corporate perspective and for the environment itself.

Gardner: Just to put a stake in the ground on this, how much data do cars typically generate? Do you have a sense of what now is the case, an average?

Cavaretta: The Energi, according to the latest information that I have, generates about 25 gigabytes per hour. Different vehicles are going to generate different amounts, depending on the number of sensors and processors on the vehicle. But the biggest key has to do with not necessarily where we are right now but where we will be in the near future.

With the amount of information that's being generated from the vehicles, a lot of it is just internal stuff. The question is how much information should be sent back for analysis and to find different patterns. That becomes really interesting as you look at external sensors -- temperature, humidity. You can know when the windshield wipers go on, and then take that information and mash it up with other external data sources too. It's a very interesting domain.

Gardner: What skills do you target for your group, and what ways do you think that you can improve on that?

Cavaretta: The skills that we have in our department, in particular on our team, are in the area of computer science, statistics, and some good old-fashioned engineering domain knowledge. We’ve really gone about this from a training perspective. Aside from a few key hires, it's really been an internally developed group.

Targeted training

The biggest advantage that we have is that we can go out and be very targeted with the amount of training that we have. There are such big tools out there, especially in the open-source realm, that we can spin things up with relatively low cost and low risk, and do a number of experiments in the area. That's really the way that we push the technologies forward.

Talking with The Open Group really gives me an opportunity to bring people on board with the idea of a different mindset. It's not, "Here's a way that data is being generated; look, try to conceive of some questions that we can use, and we'll store just that." It's, "Let's just take everything, we'll worry about it later, and then we'll find the value."

It's important to think about data as an asset, rather than as a cost. You may even have to spend some money, and it may feel a little unsafe without a really solid ROI at the beginning. Then, move toward pulling that information in and storing it in a way that allows not just the high-level data scientists to get access and provide value, but anyone who is interested in the data overall. Those are very important pieces.

The last one is how you take a big-data project -- something where you're not storing data in the traditional business intelligence (BI) framework that an enterprise develops -- and then connect that to the BI systems and look at providing value through those mash-ups. Those are really important areas that still need some work.

There are many companies, especially large enterprises, that are looking at their data assets and wondering what can they do to monetize this, not only to just pay for the efficiency improvement but as a new revenue stream.

Gardner: For those organizations that want to get started on this, how do you get started?

Cavaretta: We're definitely huge believers in pilot projects and proofs of concept, and we like to develop roadmaps by doing. So get out there. Understand that it's going to be messy. Understand that it may be a little more costly and the ROI isn't going to be there at the beginning.

But get your feet wet. Start doing some experiments, and then, as those experiments turn from just experimentation into really providing real business value, that’s the time to start looking at a more formal aspect and more formal IT processes. But you've just got to get going at this point.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Sunday, January 27, 2013

Improving signal-to-noise in risk management

This guest post comes courtesy of Jack Jones, principal of CXOWARE. He is also the author and creator of the Factor Analysis of Information Risk (FAIR) Framework. Jones will be a speaker at The Open Group Conference in Newport Beach, California this week.

By Jack Jones, CXOWARE

One of the most important responsibilities of the information security professional (or any IT professional, for that matter) is to help management make well-informed decisions. Unfortunately, this has been an elusive objective when it comes to risk. Although we’re great at identifying control deficiencies, and we can talk all day long about the various threats we face, we have historically had a poor track record when it comes to risk. There are a number of reasons for this, but in this article I’ll focus on just one -- definition.

You’ve probably heard the old adage, “You can’t manage what you can’t measure.”  Well, I’d add to that by saying, “You can’t measure what you haven’t defined.” The unfortunate fact is that the information security profession has been inconsistent in how it defines and uses the term “risk.” Ask a number of professionals to define the term, and you will get a variety of definitions. 

Besides inconsistency, another problem regarding the term “risk” is that many of the common definitions don’t fit the information security problem space or simply aren’t practical. For example, the ISO27000 standard defines risk as, “the effect of uncertainty on objectives.” What does that mean? Fortunately (or perhaps unfortunately), I must not be the only one with that reaction because the ISO standard goes on to define “effect,” “uncertainty,” and “objectives,” as follows:
  • Effect: A deviation from the expected -- positive and/or negative
  • Uncertainty: The state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence or likelihood
  • Objectives: Can have different aspects (such as financial, health and safety, information security, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process)
NOTE: Their definition for “objectives” doesn’t appear to be a definition at all, but rather an example.

Practical Concern

Although I understand, conceptually, the point this definition is getting at, my first concern is practical in nature. As a Chief Information Security Officer (CISO), I invariably have more to do than I have resources to apply. Therefore, I must prioritize and prioritization requires comparison and comparison requires measurement. It isn’t clear to me how “uncertainty regarding deviation from the expected (positive and/or negative) that might affect my organization’s objectives” can be applied to measure, and thus compare and prioritize, the issues I’m responsible for dealing with. 

This is just an example though, and I don’t mean to pick on ISO because much of their work is stellar. I could have chosen any of several definitions in our industry and expressed varied concerns.

In my experience, information security is about managing how often loss takes place, and how much loss will be realized when/if it occurs. That is our profession’s value proposition, and it’s what management cares about. Consequently, whatever definition we use needs to align with this purpose. 

The Open Group’s Risk Taxonomy (shown below), based on Factor Analysis of Information Risk (FAIR), helps to solve this problem by providing a clear and practical definition for risk. In this taxonomy, Risk is defined as, “the probable frequency and probable magnitude of future loss.” 

The elements below risk in the taxonomy form a Bayesian network that models risk factors and acts as a framework for critically evaluating risk. This framework has been evolving for more than a decade now and is helping information security professionals across many industries understand, measure, communicate and manage risk more effectively.
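
Because risk is framed here as the probable frequency and probable magnitude of future loss, it lends itself to straightforward simulation. The sketch below is a generic Monte Carlo illustration of that definition; the distributions and parameters are invented, and it is not The Open Group's or FAIR's reference implementation.

```python
# Generic Monte Carlo illustration of risk as "the probable frequency and
# probable magnitude of future loss." Distributions and parameters are
# invented for illustration; this is not the FAIR reference implementation.
import numpy as np


def simulate_annual_loss(freq_mean=2.0, loss_low=10_000, loss_high=250_000,
                         trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # How many loss events occur in each simulated year (frequency).
    events_per_year = rng.poisson(freq_mean, size=trials)
    # Total loss in each simulated year (magnitude summed over its events).
    totals = np.array([rng.uniform(loss_low, loss_high, size=n).sum()
                       for n in events_per_year])
    return {"expected_annual_loss": totals.mean(),
            "90th_percentile": np.percentile(totals, 90)}


print(simulate_annual_loss())
```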

In the communications context, you have to have a very clear understanding of what constitutes signal before you can effectively and reliably filter it out from noise. The Open Group’s Risk Taxonomy gives us an important foundation for achieving a much clearer signal.

I will be discussing this topic in more detail next week at The Open Group Conference in Newport Beach. For more information on my session or the conference, visit: http://www.opengroup.org/newportbeach2013.

This guest post comes courtesy of Jack Jones, principal of CXOWARE. He is also the author and creator of the Factor Analysis of Information Risk (FAIR) Framework. Jones will be a speaker at The Open Group Conference in Newport Beach, California this week.

Copyright The Open Group, 2013. All rights reserved.
