Three Big Data Trends Analysts Can Use in 2016 and Beyond

One of the byproducts of technology’s continued expansion is a high volume of data generated by the web, mobile devices, cloud computing and the Internet of Things (IoT). Converting this “big data” into usable information has created its own side industry, one that businesses can use to drive strategy and better understand customer behavior.

The big data industry requires analysts to stay up to date with the machinery, tools and concepts associated with big data, and how each can be used to grow the field. Let’s explore three trends currently shaping the future of the big data industry:

Big Data Analytics Degrees

Mostly due to a lack of know-how, businesses aren’t tapping into the full potential of big data. In fact, according to Forrester, most companies analyze only about 12 percent of the emails, text messages, social media, documents and other data-collecting channels available to them. Many universities now offer big data analytics degree programs in direct response to this skills gap. The programs are designed to develop analytical talent and teach the skill sets – such as programming language proficiency, quantitative analysis tool expertise and statistical knowledge – needed to interpret big data. Analysts predict the demand for industry education will only grow, making it essential for universities to adopt analytics-based degree programs.

Predicting Consumer Behaviors

Big data allows businesses to access and extract key insights about their consumers’ behavior. Predictive analytics challenges businesses to take data interpretation a step further: not only looking for patterns and trends, but using them to predict future purchasing habits or actions. In essence, predictive analytics, a branch of big data and data mining, allows businesses to make more data-based predictions, optimize processes for better business outcomes and anticipate potential risk.

Another benefit of predictive analytics is the impact it will have on industries such as health informatics. Health informatics uses electronic health record (EHR) systems to solve problems in healthcare such as effectively tracking a patient’s medical history. By documenting records in electronic format, doctors can easily track and assess a patient’s medical history from any certified access port. This allows doctors to make assumptions about a patient’s health using predictive analytics based on documented results.

Cognitive Machine Improvements

A key trend evolving in 2016 is cognitive improvement in machinery. As humans, we crave relationships and identify with brands, ideas and concepts that are relatable and easy to use. We expect technology to adapt to this need by “humanizing” the way machines retain memories and interpret and process information.

Cognitive improvement aims to eliminate computing errors while still predicting and improving outcomes the way humans would. It also looks to correct human mistakes, such as medical errors or miscalculated analytics reports. A prime example of cognitive improvement is IBM’s Watson supercomputer, widely regarded as the leading cognitive machine for answering complex questions posed in natural language.

The rise of big data mirrors the rise of tech. In 2016, we will start to see trends in big data education, as well as a shift in data prediction patterns and error solutions. The future is bright for business and analytic intelligence, and it all starts with big data.

Dr. Athanasios Gentimis

Dr. Athanasios (Thanos) Gentimis is an Assistant Professor of Math and Analytics at Florida Polytechnic University. Dr. Gentimis received a Ph.D. in Theoretical Mathematics from the University of Florida, and is knowledgeable in several computer programming/technical languages that include C++, FORTRAN, Python and MATLAB.

Source: Three Big Data Trends Analysts Can Use in 2016 and Beyond

Four Things You Need to Know about Your Customer Metrics

What Customer Metrics Do You Use?

A successful customer experience management (CEM) program requires the collection, synthesis, analysis and dissemination of different types of business metrics, including operational, financial, constituency and customer metrics (see Figure 1). The quality of your customer metrics necessarily impacts your understanding of how best to manage customer relationships to improve the customer experience, increase customer loyalty and grow your business. Using the wrong customer metrics can lead to sub-optimal decisions, while using the right ones can lead to good decisions that give you a competitive edge. How do you know if you are using the right customer metrics in your CEM program? This post will help formalize a set of standards you can use to evaluate your customer metrics.

Figure 1. Customer experience management is about collection, synthesis, analysis and dissemination of business metrics.

Customer Metrics

Customer metrics are numerical scores or indices that summarize customer feedback results. They can be based on either customer ratings (e.g., average satisfaction rating with product quality) or open-ended customer comments (via sentiment analysis). Additionally, customer ratings can be based on a single item or an aggregated set of items (averaging over a set of items to get a single score/metric).

Meaning of Customer Metrics

Customer metrics represent more than just numerical scores. They have a deeper meaning, representing underlying characteristics and mental processes of your customers: their opinions about, attitudes toward and intentions regarding your company or brand. Figure 2 depicts this relationship between the feedback tool (the questions) and the overall score that we label as the underlying construct. Gallup claims to measure customer engagement (CE11) using 11 survey questions. Other practitioners have developed their own metrics that assess underlying customer attitudes/intentions: the SERVQUAL method assesses several dimensions of service quality; the RAPID Loyalty approach measures three types of customer loyalty (retention, advocacy and purchasing); and the Net Promoter Score® measures likelihood to recommend.

Figure 2. Advocacy Loyalty Index (customer metric) measures extent to which customers will advocate/ feel positively toward your company (underlying construct) using three items/questions.

Customer Metrics are Necessary for Effective CEM Programs but not Frequently Used

Compared to their loyalty-lagging counterparts, loyalty-leading companies adopt specific customer feedback practices that require the use of customer metrics: sharing customer results throughout the company, including customer feedback in company/executive dashboards, compensating employees based on customer feedback, linking customer feedback to operational metrics, and identifying improvement opportunities that maximize ROI.

Despite the usefulness of customer metrics, few businesses take full advantage of them. In a study examining the use of customer experience (CX) metrics, Bruce Temkin found that only about half (52%) of businesses collect and communicate CX metrics. Even fewer review CX metrics with cross-functional teams (39%), tie compensation to CX metrics (28%) or make trade-offs between financial and CX metrics (19%).

Evaluating Your Customer Metrics

As companies continue to grow their CEM programs and adopt best practices, they will rely more and more on the use of customer metrics. Whether you are developing your own in-house customer metric or using a proprietary customer metric, you need to be able to critically evaluate them to ensure they are meeting the needs of your CEM program. Here are four questions to ask about your customer metrics.

1. What is the definition of the customer metric?

A customer metric needs to be supported by a clear description of what it measures. The description should read the way words are defined in the dictionary: unambiguous and straightforward. This definition, referred to as the constitutive definition, not only tells you what the customer metric is measuring, it also tells you what the customer metric is not measuring.

The complexity of the definition will match the complexity of the customer metric itself. Depending on the customer metric, definitions can reflect a narrow concept or a more complex concept. For single-item metrics, definitions are fairly narrow. For example, a customer metric based on the satisfaction rating of a single overall product quality question would have the following definition: “Satisfaction with product quality”. For customer metrics that are made up of several items, a well-articulated definition is especially important. These customer metrics measure something more nuanced than single-item customer metrics. Try to capture the essence of the commonality shared across the different items. For example, if the ratings of five items about the call center experience (e.g., technical knowledge of rep, professionalism of rep, resolution) are combined into an overall metric, then the definition of the overall metric would be: “Overall satisfaction with call center experience.”

2. How is the customer metric calculated?

Figure 3. Two Measurement Criteria: Reliability is about precision; Validity is about meaning

Closely related to question 1, you need to convey precisely how the customer metric is calculated. This requires understanding two things: 1) the specific items/questions that make up the customer metric; and 2) how those items/questions are combined to get the final score. Knowing the specific items and how they are combined helps define what the customer metric is measuring (the operational definition). Any survey instructions and information about the rating scale (numerical and verbal anchors) need to be included as well.

3. What are the measurement properties of the customer metric?

Measurement properties refer to scientifically derived indices that describe the quality of a customer metric. Applying the field of psychometrics and scientific measurement standards (Standards for Educational and Psychological Testing), you can evaluate the quality of customer metrics. By analyzing existing customer feedback data, you can evaluate customer metrics along two criteria: 1) reliability and 2) validity. Reliability refers to measurement precision/consistency; validity is concerned with what is being measured. Providing evidence of the reliability and validity of your customer metrics is essential to establishing a solid set of customer metrics for your CEM program. The relationship between these two measurement criteria is depicted in Figure 3. Your goal is to develop/select customer metrics that are both reliable and valid (top right quadrant).

Figure 4. Four Types of Reliability

While there are different kinds of reliability (see Figure 4), one in particular is especially important when the customer metric is made up of multiple items (most commonly, items are averaged to get one overall metric). Internal consistency reliability is a summary index that tells you whether the items should be combined. High internal consistency (above .80 is good; 1.0 is the maximum possible) tells you that the items measure one underlying construct, so aggregating them makes sense. Low internal consistency tells you that the items are likely measuring different things and should not be aggregated.
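For multi-item metrics, internal consistency is typically quantified with Cronbach’s alpha. As a minimal sketch (the ratings below are invented for a hypothetical three-item advocacy scale, not taken from any real survey), alpha can be computed directly from the item-level responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: internal consistency of a multi-item scale.

    items: 2-D array, one row per respondent, one column per item.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 0-10 ratings from five respondents on three advocacy items
ratings = np.array([
    [9, 8, 9],
    [7, 7, 8],
    [10, 9, 10],
    [4, 5, 4],
    [6, 6, 7],
])
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))
```

For these made-up ratings alpha lands well above the .80 rule of thumb, so averaging the three items into one metric would be defensible; values far below .80 would argue against aggregation.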

There are three different lines of validity evidence that help show the customer metric actually measures what you think it is measuring:

  • Content validity: examine the content of the items to determine how well they represent your variable of interest.
  • Criterion validity: calculate how well the customer metric correlates with some external criterion.
  • Construct validity: understand, through statistical relationships among different metrics, how your customer metric fits into a theoretical framework that distinguishes it from other customer metrics (e.g., how is the customer engagement metric different from the customer advocacy metric?).

Figure 5. Evidence of criterion-related validity: Identifying which operational metrics are related to customer satisfaction with the service request (SR)

These three different lines of validity evidence demonstrate that the customer metric measures what it is intended to measure. Criterion-related validity evidence often involves linking customer metrics to other data sources (operational metrics, financial metrics, constituency metrics).
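As a sketch of criterion-related validity evidence, one simple approach is to correlate the customer metric with an external business outcome. The numbers below are invented for eight hypothetical accounts; in practice you would link survey scores to real financial or operational records:

```python
import numpy as np

# Invented per-account data: a customer advocacy metric (0-10) and an
# external criterion (next-year revenue growth, %)
advocacy = np.array([8.5, 6.0, 9.0, 4.5, 7.0, 5.5, 9.5, 6.5])
revenue_growth = np.array([12.0, 3.0, 15.0, -2.0, 8.0, 1.0, 18.0, 5.0])

# A sizable Pearson correlation between the metric and the criterion is
# one line of evidence that the metric relates to outcomes that matter
r = np.corrcoef(advocacy, revenue_growth)[0, 1]
print(f"r = {r:.3f}")
```

A strong positive correlation like the one this toy data produces would support criterion validity; a near-zero correlation would suggest the metric is not capturing anything the business outcome reflects.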

Exploring the reliability and validity of your current customer metrics has a couple of extra benefits. First, these types of analyses can improve the measurement properties of your current customer metrics by identifying unnecessary questions. Second, reliability and validity analysis can improve the overall customer survey by identifying CX questions that do not help explain customer loyalty differences. Removal of specific CX questions can significantly reduce survey length without loss of information.

4. How useful is the customer metric?

While customer metrics can be used for many types of analyses (e.g., driver, segmentation), their usefulness is demonstrated by the number and types of insights they provide. Your validation efforts to understand the quality of the customer metrics create a practical framework for making real organizational changes. Specifically, by understanding the causes and consequences of the customer metric, you can identify/create customer-centric operational metrics (See Figure 5) to help manage call center performance, understand how changes in the customer metric correspond to changes in revenue (See Figure 6) and identify customer-focused training needs and standards for employees (See Figure 7).

Figure 6. A useful customer metric (satisfaction with TAM) reveals real differences in business metrics (revenue)

Examples

Below are two articles on the development and validation of four customer metrics. One article focuses on three related customer metrics. The other article focuses on an employee metric. Even though this present blog post talked primarily about customer metrics, the same criteria can be applied to employee metrics.

In each article, I present the information needed to critically evaluate each metric: 1) a clear definition of the customer metric; 2) a description of how the metric is calculated; 3) its measurement properties (reliability/validity); and 4) evidence that the metric is related to important outcomes (e.g., revenue, employee satisfaction). The articles are:

  • Hayes, B.E.  (2011). Lessons in loyalty. Quality Progress, March, 24-31. Paper discusses the development and validation of the RAPID Loyalty approach. Three reliable customer loyalty metrics are predictive of different types of business growth. Read entire article.
  • Hayes, B. E. (1994). How to measure empowerment. Quality Progress, 27(2), 41-46. Paper discusses need to define and measure empowerment. Researcher develops reliable measure of employee perceptions of empowerment, the Employee Empowerment Questionnaire (EEQ). The EEQ was related to important employee attitudes (job satisfaction). Read entire article.
Figure 7. Evidence of Criterion-Related Validity: Satisfaction with TAM Performance (customer metric) is related to TAM training.

Summary

A customer metric is good when: 1) it is supported with a clear definition of what it measures and what it does not measure; 2) there is a clear method for how the metric is calculated, including all items and how they are combined; 3) there is good reliability and validity evidence showing that the metric measures what it is supposed to measure; and 4) it is useful in helping drive real internal changes (e.g., improved marketing, sales, service) that lead to measurable business growth (e.g., increased revenue, decreased churn).

Using customer metrics that meet these criteria will ensure your CEM program is effective in improving how you manage customer relationships. Clear definitions of the metrics and accompanying descriptions of how they are calculated help improve communications regarding customer feedback; different employees, across job levels and roles, can speak a common language about feedback results. Establishing the reliability and validity of the metrics gives senior executives the confidence they need to use customer feedback as part of their decision-making process.

The bottom line: a good customer metric provides information that is reliable, valid and useful.

Source

Why Focus Groups Don’t Work And Cost Millions

We all know what a “focus group” is and what it is used for. What we are slower to admit is that it has little use, and that we keep relying on it out of habit. With a changing consumer ecosystem, we should be considering more quantitative techniques better suited to the current landscape; with ever-evolving technology and sophisticated tools, there is no reason not to. The focus group was never an efficient way to measure product-market fit. But because it was the only readily available method that could provide a decent start, the industry went with it. We are now at a point where we can upgrade to better ways of measuring potential product need and adoption.

A few of the downsides of using focus groups

Unnatural settings for participants
Consider a situation where a group of strangers comes together to discuss a product they have never seen before. When in real life would such a situation occur? Why would someone speak honestly without any trust between moderator and participant? This is not a natural setting in which anyone experiences a real product, so why should we use this template to make decisions?

Not how a real decision process works
Calling people in, having them sit in a group and vouch for a product is not how we should judge the attractiveness or likely adoption of a product. Several factors work in tandem to influence our decision to spend on a product, and those are almost impossible to replicate in focus group sessions. In real life, for example, most people depend on word of mouth and suggestions from friends and family before trying a new product. This flaw introduces a greater margin of error into data gathered from such groups.

Motivation for the participants is different
This is another area that makes focus groups less reliable. Consider why someone would detach from their day-to-day life to attend a focus group. The reasons are many: money, being an early adopter, the chance to meet and network with people, and so on. Such variation in experience and motivation among participants introduces more noise than signal.

Not the right framework for asking for snap judgments on products
Another point against the focus group template is that it gathers people out of the blue, has them experience a product for the first time and immediately asks for their opinion. Everyone comes to a product with their own speed of understanding. So how can the findings not be flawed when everyone is asked to share an opinion within the same short interval? This too introduces error into the findings.

Little is useless and more is expensive
We all know that participants’ backgrounds are highly variable, and it is almost impossible to carve a representative niche out of them. Invite few participants and it is extremely hard to pinpoint their needs; invite too many and the model becomes expensive while retaining all of its errors and flaws. This makes the focus group model both unreliable and costly.

It is not about the product but the experience
A product never works alone; it works in conjunction with the experience delivered by other, dependent areas, and those cumulative interactions produce the product experience. In a focus group, it is extremely difficult to deliver that exact experience, because it has not been built yet: experience emerges only after numerous product iterations with customers. So, in the initial stages, it is extremely difficult to conclude anything from a quick hands-on with a product that has no experience built around it.

Innovation suppressant
Consider a case where iTunes is pitched to a focus group: “iTunes is a place where you can buy individual songs and not the whole album. Yes, online; no, no CDs.” Have you ever wondered how that would fly? Focus groups are good at endorsing things close to what already exists today. A groundbreaking product whose market has not yet been explored can provoke unease and easily meet with outright rejection. In that sense, focus groups are pretty much innovation killers.

People might be unintentionally dishonest
Consider being asked for your true feelings about a product in a room full of people who think highly of it. Wouldn’t that skew your answer? We all have a strong tendency toward political correctness, which skews the actual findings. Other biases, such as groupthink and dominant personalities in the room, have also been shown to invalidate the findings of focus group sessions. All of this introduces error in judgment and makes the collected data unreliable.

The reasons stated above are a few of the many that make focus groups obsolete, error-prone and unreliable. We should avoid using them and substitute other, more effective methods.

So, what’s next? What should companies do? Let’s leave that for another day and another blog. Catch you all soon.

Source: Why Focus Groups Don’t Work And Cost Millions by d3eksha

What is the Value of International Polls about the US Presidential Candidates?

I saw the results of a recent opinion poll about the US presidential election that amazed me. While many recent polls of US voters reveal a virtual tie in the presidential race between Barack Obama and Mitt Romney, a BBC poll surveying citizens of other countries found overwhelming support for Obama over Romney. In this late summer/early fall study by GlobeScan and PIPA of over 20,000 people across 21 countries, 50% favored Obama and 9% favored Romney.

Global Businesses Need Global Feedback

Companies conducting international business regularly poll their customers and prospects across the different countries they serve in hopes of gaining better insights about how to run their business. They use this feedback to help them decide where to enter new markets, guide product development and improve service quality, to name a few uses. The end goal is to create a loyal customer base (e.g., customers who come back, recommend and expand the relationship).

The US government’s policies impact international relations on many levels (e.g., economically, financially and socially). Could there be some value from this international poll for the candidates themselves and their constituencies?

Looking at the results of the poll, a few implications stand out to me:

  1. The Romney brand has little international support. Mitt Romney has touted that his business experience has prepared him to be an effective president. How can he use these results to improve his image abroad?
  2. Many international citizens do not care about the US presidency (in about half of the countries, fewer than 50% of respondents expressed an opinion for either Obama or Romney).
  3. After four years of an Obama presidency, the international community continues to support the re-election of Obama. Obama received comparable results in 2008.

I like to use data whenever possible to help me guide my decisions. However, I will be the first to admit that I am no expert on international relations. So, I am seeking help from my readers. Here are three questions:

  1. Are these survey results useful to help guide US constituencies’ voting decision?
  2. Are international survey results about the US presidential candidates analogous to international customer survey results about US companies?
  3. If you owned a company and were selling the Obama and Romney brands, how would you use these survey results (barring simply ignoring them) to improve international customer satisfaction?

I would love to hear your opinions.

Source: What is the Value of International Polls about the US Presidential Candidates?

Are U.S. Hospitals Delivering a Better Patient Experience?

The Centers for Medicare & Medicaid Services (CMS) use patient feedback about their care as part of their reimbursement plan for acute care hospitals. Under the Hospital Value-Based Purchasing Program, CMS makes value-based incentive payments to acute care hospitals, based either on how well the hospitals perform on certain quality measures or how much the hospitals’ performance improves on certain quality measures from their performance during a baseline period. This program began in FY 2013 for discharges occurring on or after October 1, 2012.

A standard patient satisfaction survey, known as HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems), is the source of the patient feedback for the reimbursement program. I have previously used these publicly available HCAHPS data to understand the state of affairs for US hospitals in 2011 (see Big Data Provides Big Insights for U.S. Hospitals). Now that the Value-Based Purchasing program has been in effect since October 2012, I wanted to revisit the HCAHPS patient survey data to determine if US hospitals have improved. First, let’s review the HCAHPS survey.

The HCAHPS Survey

The survey asks a random sample of recently discharged patients about important aspects of their hospital experience. The data set includes patient survey results for US hospitals on ten measures of patients’ perspectives of care. The 10 measures are:

  1. Nurses communicate well
  2. Doctors communicate well
  3. Received help as soon as they wanted (Responsive)
  4. Pain well controlled
  5. Staff explain medicines before giving to patients
  6. Room and bathroom are clean
  7. Area around room is quiet at night
  8. Given information about what to do during recovery at home
  9. Overall hospital rating
  10. Recommend hospital to friends and family (Recommend)

For questions 1 through 7, respondents were asked to provide frequency ratings about the occurrence of each attribute (Never, Sometimes, Usually, Always). For question 8, respondents were provided a Y/N option. For question 9, respondents were asked to provide an overall rating of the hospital on a scale from 0 (Worst hospital possible) to 10 (Best hospital possible). For question 10, respondents were asked to provide their likelihood of recommending the hospital (Definitely no, Probably no, Probably yes, Definitely yes).

The Metrics

The HCAHPS data sets report metrics for each hospital as percentages of responses. Because the data sets have already been somewhat aggregated (e.g., percentages reported for groups of response options), I was unable to calculate average scores for each hospital. Instead, I used top box scores as the metric of patient experience. I have found that top box scores are highly correlated with average scores across groups of companies, suggesting that the two metrics tell us the same thing about the companies (in our case, hospitals).

Top box scores for the respective rating scales are defined as: 1) Percent of patients who reported “Always”; 2) Percent of patients who reported “Yes”; 3) Percent of patients who gave a rating of 9 or 10; 4) Percent of patients who said “Definitely yes.”

Top box scores provide an easy-to-understand way of communicating the survey results for different types of scales. Even though there are four different rating scales for the survey questions, using a top box reporting method puts all metrics on the same numeric scale. Across all 10 metrics, hospital scores can range from 0 (bad) to 100 (good).
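As a sketch of how the different scales collapse onto one 0–100 metric, the following Python snippet (with responses invented for illustration) computes top box scores for two of the question types:

```python
def top_box(responses, top):
    """Percent of responses falling in the top category (or categories)."""
    hits = sum(1 for r in responses if r in top)
    return 100.0 * hits / len(responses)

# Frequency-scale item (questions 1-7): top box = "Always"
freq = ["Always", "Usually", "Always", "Sometimes", "Always", "Never"]
print(top_box(freq, {"Always"}))   # 50.0

# 0-10 overall rating (question 9): top box = a rating of 9 or 10
overall = [10, 9, 8, 7, 9, 10, 5, 9]
print(top_box(overall, {9, 10}))   # 62.5
```

The same function handles the Y/N item (top = {"Yes"}) and the recommend item (top = {"Definitely yes"}), which is exactly why the top box approach puts all ten metrics on a common scale.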

I examined patient experience (PX) ratings of acute care hospitals across two time periods: 2011 (Q3 2010 through Q2 2011) and 2013 (Q4 2012 through Q3 2013). The data from the 2013 time frame are the latest publicly available patient survey data as of this writing.

Results: Patient Satisfaction with US Hospitals Increasing

Figure 1. Patient advocacy has increased for US hospitals

Figure 1 contains the comparisons of patient advocacy ratings for US hospitals across the two time periods. Paired t-tests comparing the loyalty metrics across the two time periods were statistically significant, showing that patients reported higher levels of loyalty toward hospitals in 2013 than in 2011. This increase in patient loyalty, while small, is real.

Greater gains in patient loyalty have been seen for Overall Hospital Rating (increase of 2.26) compared to Recommend (increase of 1.09).

Figure 2. Patient satisfaction with their in-patient experience has increased for US hospitals

Figure 2 contains the comparisons for patient experience ratings for US hospitals across the two time periods. Again, paired T-tests comparing the seven PX metrics across the two time periods were statistically significant, showing that patients are reporting higher levels of satisfaction with their in-patient experience in 2013 compared to 2011.

The biggest increases in satisfaction were seen in “Given information about recovery,” “Staff explained meds” and “Responsive.” The smallest increases in satisfaction were seen for “Doctor communication” and “Pain well controlled.”
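To make the comparison concrete, here is a minimal, stdlib-only sketch of the paired t statistic used above. The scores below are invented for eight hypothetical hospitals; the published analysis used the full HCAHPS files covering thousands of hospitals:

```python
import math
from statistics import mean, stdev

# Invented top-box "Recommend" scores for the same eight hospitals
# in the two reporting periods
scores_2011 = [68.0, 71.5, 64.0, 70.0, 66.5, 69.0, 72.0, 65.5]
scores_2013 = [69.5, 72.0, 65.5, 71.0, 68.0, 70.0, 73.5, 66.0]

# A paired t-test works on the per-hospital differences, so each
# hospital serves as its own control across the two periods
diffs = [b - a for a, b in zip(scores_2011, scores_2013)]
t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
df = len(diffs) - 1

# With df = 7, |t| above the ~2.36 critical value is significant at p < .05
print(f"t({df}) = {t:.2f}")
```

Pairing matters here: hospitals differ widely from one another, and differencing each hospital against itself removes that between-hospital variation, which is why even a small average improvement can be statistically significant.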

Summary

Hospital reimbursements are based, in part, on patient satisfaction ratings. Consequently, hospital executives are focusing their efforts on improving the patient experience.

Comparing HCAHPS patient survey results from 2011 to 2013, it appears that hospitals have improved how they deliver patient care. Patient loyalty and PX metrics show significant improvements from 2011 to 2013.

Originally Posted at: Are U.S. Hospitals Delivering a Better Patient Experience? by bobehayes

RSPB Conservation Efforts Take Flight Thanks To Data Analytics

shutterstock_241349182-684x250

Big data may be helping to change the way we interact with the world around us, but how much can it do to help the wildlife that shares our planet?

With hundreds of species to track across the UK, ornithological charity the RSPB accrues huge amounts of data every year as it tries to ensure its efforts help as many birds as possible.

And in order to ensure they stay on top of this mountain of data, the charity has teamed up with analytics specialists SAS to develop and create more in-depth research and conservation efforts which should benefit birds around the country.

Flying high

“We need to make sense of a variety of large and complex data sets. For example, tracking the movements of kittiwakes and gannets as they forage at sea produces millions of data points,” said Dr. Will Peach, head of research delivery at RSPB.

“Conservation informed by statistical evidence is always more likely to succeed than that based solely on guesswork or anecdote. SAS allows us to explore the data to provide the evidence needed to confidently implement our initiatives.”

So far, the RSPB has implemented SAS’ advanced analytics solutions to combine datasets on yellowhammer and skylark nesting success with pesticide use and agriculture cropping patterns to judge the consequences for the birds.

RSPB also turned to SAS to explore how albatross forage across the Southern Ocean.

With large-scale commercial longline fishing killing tens of thousands of albatrosses a year, the goal was to cut down on the death rate and protect the 17 albatross species currently at risk.

The society took data from tags worn by the birds, merging it with external data sets like sea-surface temperatures and the location of fishing grounds.
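The kind of merge described above can be sketched in a few lines. This is a minimal illustration, not the RSPB's actual pipeline: the 1-degree grid resolution, field names, and sample values are all assumptions made for the example.

```python
# Hypothetical sketch: joining albatross tag fixes with external data sets
# (a sea-surface temperature grid and known fishing-ground cells).

def grid_cell(lat, lon, resolution=1.0):
    """Snap a GPS fix to the corner of its containing grid cell."""
    return (int(lat // resolution), int(lon // resolution))

def enrich_fixes(tag_fixes, sst_by_cell, fishing_cells):
    """Attach sea-surface temperature and a fishing-ground flag to each fix."""
    enriched = []
    for fix in tag_fixes:
        cell = grid_cell(fix["lat"], fix["lon"])
        enriched.append({
            **fix,
            "sst_c": sst_by_cell.get(cell),        # None if no reading here
            "near_fishing": cell in fishing_cells,  # overlap -> bycatch risk
        })
    return enriched

fixes = [{"bird": "alb-01", "lat": -44.3, "lon": 171.2},
         {"bird": "alb-01", "lat": -45.1, "lon": 170.8}]
sst = {(-45, 171): 9.4, (-46, 170): 8.7}
risk_cells = {(-46, 170)}
result = enrich_fixes(fixes, sst, risk_cells)
```

Once the fixes carry both environmental and fishing-activity context, flagging where foraging birds overlap with longline fleets becomes a simple filter over the enriched records.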

“Scientific research is extremely fast-moving and there are now huge volumes of data to analyse,” said Andy Cutler, director of strategy at SAS UK & Ireland.

“SAS is able to provide a means of managing all the data and then apply cutting-edge analytical techniques that deliver valuable insights almost immediately. For example, through analysing previously non-informative data, RSPB is now able to intervene and correct the breeding problems faced by various bird species during treacherous migration journeys.”
Read more at http://www.techweekeurope.co.uk/data-storage/business-intelligence/rspb-conservation-sas-data-analytics-167988#Dzdo3ud6Ej3vt6ZC.99


Source: RSPB Conservation Efforts Take Flight Thanks To Data Analytics by analyticsweekpick

Betting the Enterprise on Data with Cloud-Based Disaster Recovery and Backups

One of the more pressing consequences of truly transitioning to a data-driven company culture is a renewed esteem for the data—valued as an asset—that gives the enterprise its worth. Unlike other organizational assets, protecting data requires more than mere security measures. It necessitates reliable, test-worthy backup and disaster recovery plans that can automate these vital processes to account for virtually any scenario, especially some of the more immediate ones involving:
  • Ransomware: Ransomware attacks are increasing in incidence and severity. They occur when external entities deploy malware to encrypt organizational data, using encryption measures similar to, if not more effective than, those the organizations themselves use, and release the data only after being paid to do so. “Ransomware was not something that many people worried about a couple years ago,” Unitrends VP of Product Marketing Dave LeClair acknowledged. “Now it’s something that almost every company that I’ve talked to has been hit. The numbers are getting truly staggering how frequently ransomware attacks are hitting IT, encrypting their data, and demanding payments to unencrypt it from these criminal organizations.”
  • Downtime: External threats are not the only factors that engender IT downtime. Conventional maintenance and updating measures for various systems also result in situations in which organizations cannot access or leverage their data. In essential time-sensitive applications, cloud-based disaster recovery and backup solutions ensure business continuity.
  •  Contemporary IT Environments: Today’s IT environments are much more heterogeneous than they once were. It is not uncommon for organizations to utilize existing legacy systems alongside cloud-based applications and those involving virtualization. Cloud disaster recovery and data backup platforms preserve connected continuity in a singular manner to reduce costs and increase the efficiency of backup systems.
  • Acts of Nature: Even as reliance on technology grows, organizations remain susceptible to unforeseen events stemming from weather conditions, natural disasters, and even man-made catastrophes. In these cases, cloud options for recovery and backups are the most desirable because they store valued data offsite.
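The ransomware defense in the first bullet rests on point-in-time recovery: each backup pass stores an immutable snapshot, and restoration picks the newest snapshot taken before the encryption event. The structures below are illustrative, not any vendor's format.

```python
# Sketch of "rolling back the clock" to the last snapshot that predates
# a ransomware attack. Timestamps and payloads are invented for the example.

def latest_clean_snapshot(snapshots, attack_time):
    """snapshots: list of (timestamp, data) sorted oldest-first.
    Returns the newest snapshot taken strictly before attack_time."""
    clean = [s for s in snapshots if s[0] < attack_time]
    return clean[-1] if clean else None

snaps = [(100, "data@100"), (200, "data@200"), (300, "encrypted!")]
restored = latest_clean_snapshot(snaps, attack_time=250)
```

The key property is that snapshots are append-only: the malware can encrypt the live data set, but it cannot rewrite history, so a pre-attack state is always recoverable.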

Additionally, when one considers that the primary benefits of the cloud are its low cost storage—at scale—and ubiquity of access regardless of location or time, cloud disaster recovery and backup solutions are a logical extension of enterprise infrastructure. “The new technologies, because of the ability of doing things in the cloud, kind of democratizes it so that anybody can afford to have a DR environment, particularly for their critical applications,” LeClair remarked.

Recovery and Backup Basics
There are a multitude of ways that organizations can leverage cloud recovery and data backup options to readily restore production capabilities in the event of system failure:

  • Replication: Replication is the means by which data is copied elsewhere—in this case, to the cloud for storage. Data can also be replicated to other forms of storage (e.g., disk or tape) and transmitted to a cloud service provider that way.
  • Archives/Checkpoints: Archives or checkpoints are states of data at particular points in time for a data set which are preserved within a system. Therefore, organizations can always revert their system data to an archive to restore it to a time before some sort of failure occurred. According to LeClair, this capability is an integral way of mitigating the effects of ransomware: “You can simply rollback the clock, to the point before you got encrypted, and you can restore your system so you’re good to go”.
  • Instant Recovery Solutions: These solutions not only restore systems to a point in time prior to events of failure, but also facilitate workload management from the backup appliance itself. This capability is critical when on-premise systems are still down: the appliance’s compute power and storage replace those of the primary solution, which “allows you to spin off that workload in less than five minutes so you can get back up and running,” LeClair said.
  • Incremental Forevers: This recovery and backup technique is particularly useful because it performs one full backup of a data set or application and thereafter backs up only what has changed since the previous pass. That economy is pivotal when protecting massive quantities of big data.
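The incremental-forever idea in the last bullet can be sketched with a per-file content hash: the first pass copies everything because the catalog is empty, and later passes copy only files whose hash has changed. The in-memory catalog and byte-string "files" are simplifications; a real product would persist the catalog and deduplicate at the block level.

```python
# Minimal sketch of an "incremental forever" backup pass.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def incremental_backup(files, catalog):
    """files: {path: bytes} as seen right now; catalog: {path: hash} from
    previous runs. Returns paths backed up this pass; updates the catalog."""
    changed = []
    for path, data in files.items():
        digest = fingerprint(data)
        if catalog.get(path) != digest:   # new or modified since last pass
            changed.append(path)
            catalog[path] = digest
    return changed

catalog = {}
first = incremental_backup({"a.txt": b"v1", "b.txt": b"v1"}, catalog)   # full
second = incremental_backup({"a.txt": b"v2", "b.txt": b"v1"}, catalog)  # delta
```

After the initial full pass, every subsequent pass transfers only deltas, which is what makes the technique viable at big-data scale.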

Cloud Replication
There are many crucial considerations when leveraging the cloud as a means of recovery and data backup. Foremost of these is the replication process of copying data from on premises to the cloud. “It absolutely is an issue, particularly if you have terabytes of data,” LeClair mentioned. “If you’re a decent sized enterprise and you have 50 or 100 terabytes of data that you need to move from your production environment to the cloud, that can take weeks.” Smaller cloud providers such as Unitrends can issue storage to organizations via disk, which is then overnighted and uploaded to the cloud so that, on an ongoing basis, organizations only need to replicate the changes of their data.
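LeClair's "that can take weeks" is easy to sanity-check with back-of-envelope arithmetic. The 1 Gbps line rate and 70% effective throughput below are illustrative assumptions, not figures from the article.

```python
# Rough time to push a large data set over a WAN link to the cloud.

def transfer_days(terabytes, link_gbps=1.0, efficiency=0.7):
    """Days to move `terabytes` (decimal TB) over a link running at
    `link_gbps` with the given effective throughput fraction."""
    bits = terabytes * 1e12 * 8                      # TB -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)  # usable bits per second
    return seconds / 86400

days_100tb = transfer_days(100)   # roughly two weeks on a dedicated 1 Gbps link
```

On a shared or slower link the figure stretches well past weeks, which is why seeding the initial copy by shipping a physical disk, then replicating only changes, is the practical approach the article describes.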

Machine Transformation
Another consideration pertains to actually utilizing that data in the cloud due to networking concerns. “Networking in cloud generally works very differently than what happens on premise,” LeClair observed. Most large public cloud providers (such as Amazon Web Services) have networking constraints regarding interconnections that require significant IT involvement to configure. However, competitive disaster recovery and backup vendors have dedicated substantial resources to automating various facets of recovery, including all of the machine transformation (transmogrification) required to provision a production environment in the cloud.

Merely replicating data into the cloud is just the first step. The larger concern for actually utilizing it there in cases of emergency requires provisioning the network, which certain cloud platforms can do automatically so that, “You have a DR environment without having to actually dedicate any compute resources yet,” LeClair said. “You basically have your data that’s replicated into Amazon, and you have all the configuration data necessary to spin off that data if you need to. It’s a very cost-effective way to keep yourself protected.”
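The arrangement in the quote, replicated data plus stored configuration but no running compute, is sometimes called a "pilot light" pattern. The sketch below is hypothetical: the field names, instance types, and the $23/TB-month storage rate are invented for illustration and imply no specific vendor API.

```python
# Hypothetical DR plan: data already sits in cloud storage, and the plan
# records everything needed to provision compute only at failover time.

DR_PLAN = {
    "replica_bucket": "s3://example-dr-replica",   # where data is replicated
    "vpc_cidr": "10.20.0.0/16",                    # network to provision
    "instances": [
        {"role": "app", "type": "m5.large", "count": 2},
        {"role": "db",  "type": "r5.xlarge", "count": 1},
    ],
}

def monthly_standby_cost(plan, storage_tb, usd_per_tb_month=23.0):
    """Until failover is declared, the only recurring cost is storage."""
    return storage_tb * usd_per_tb_month

cost = monthly_standby_cost(DR_PLAN, storage_tb=10)
```

The cost asymmetry is the point LeClair makes: compute is billed only if and when the environment is actually spun up, so standby protection stays cheap.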
Recovery Insurance
The automation capabilities of cloud data recovery and backup solutions also include testing, which is a vital prerequisite for ensuring that such systems actually function properly on demand. Traditionally, organizations tested their recovery environments sparingly, if at all. “There’s now technology that essentially automates your DR environment, so you don’t have to pull up human resources and time into it,” LeClair said. In many instances, those automation capabilities hinge upon the cloud, which has had a considerable impact on what disaster recovery and backup can deliver. The overarching effect is that data recovery and backup become more consistent, cheaper, and easier to facilitate in an increasingly complicated IT world in which data is preeminent.

Source: Betting the Enterprise on Data with Cloud-Based Disaster Recovery and Backups by jelaniharper