How To Calculate Average Sales

No matter what industry you’re in, if your business deals with customers, you have to keep track of sales. When you need a quick way to monitor your company’s progress toward its objectives, sales are one of the easiest metrics to use because they directly reflect how efficiently your operation turns effort into profit. Even so, raw sales data can be overwhelming and may not always paint the clearest picture.

Using average sales across different periods can give you a better idea of how well your sales strategies and marketing campaigns are performing, what tactics are connecting with consumers, and how successful your sales team is at converting leads. More importantly, it gives you a straightforward way to establish a standard for measuring success and failure. Calculating average sales is an uncomplicated process and can help steer your business decisions for greater success.

Why Measure Average Sales?

More than just an eagle’s eye view of your sales operations, average sales can also give you a granular view of the results of every sale. Measuring average sales by customer can deliver useful insights, such as how many dollars customers are spending at the point of sale and how that compares to historical data.

On a broader level, you can compare the efficiency of different teams, stores, and branches by measuring their monthly and daily sales against historic averages and against each other. This is important when choosing how to allocate budgets, deciding where to trim resources, and determining where to provide greater support. By understanding historic patterns and combining them with more real-time data, you can make smarter decisions regarding your sales pipeline.

Looking for other ways to measure your sales numbers? Explore our interactive sales dashboards!


How to Calculate Average Sales

Calculating your average sales depends on two factors: the period or frequency you want to analyze and the total sales value for that period. Average sales can be measured on a much smaller scale, such as daily or weekly, or on a larger scale, such as monthly or even annually. To calculate the average sales over your chosen period, simply take the total value of all sales orders in that timeframe and divide by the number of intervals. For example, you can calculate average sales per month by taking the value of sales over a year and dividing by 12 (the number of months in the year). If the total sales for the year were $1,000,000, monthly sales would be calculated as follows:

Average sales per month = Total annual sales / 12 = $1,000,000 / 12 ≈ $83,333

Average sales per month, in this case, would be roughly $83,333. Daily average sales are also a common calculation, and they can vary based on the broader timeframe being measured. For example, you could measure daily average sales over a single month to compare year-over-year data, or calculate daily average sales over a full year to see how stores and sales teams performed throughout a 12-month period. In this case, the calculation does not change, except that the numerator becomes the total sales for the period being measured and the divisor becomes the number of working days in that period.
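
To make the arithmetic concrete, here is a minimal Python sketch of the same calculation. The $1,000,000 annual total is the example figure from above; the 250 working days is an assumed value used purely for illustration.

  # Average sales over a period: total sales divided by the number of intervals.
  def average_sales(total_sales, num_intervals):
      return total_sales / num_intervals

  annual_sales = 1_000_000
  print(average_sales(annual_sales, 12))   # average sales per month: ~83,333
  print(average_sales(annual_sales, 250))  # average sales per working day (assuming 250 working days): 4,000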

A Variant Average Sales Calculation

Another useful way to track the average value of a sale is to measure how effective your sales team is on a per-customer basis. While overall visitors and the number of sales may be on the rise, if the value of sales per customer is declining, your overall revenues may actually fall. In this case, the division is similar to average sales, but instead of dividing by a time interval, you divide the total sales value by the number of transactions completed during the period you are analyzing. For instance, if your total sales for the day were $15,000 and you completed 35 unique transactions, the average value of a sale would be approximately $429 per customer. The formula to calculate average sales value is as follows:

Average sales value = Total sales / Number of transactions = $15,000 / 35 ≈ $429
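
A similarly minimal Python sketch of the per-transaction version, using the example figures above ($15,000 in daily sales across 35 transactions):

  # Average value of a sale: total sales divided by the number of transactions.
  def average_sale_value(total_sales, num_transactions):
      return total_sales / num_transactions

  print(round(average_sale_value(15_000, 35), 2))  # ~428.57 per customer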

Other KPIs You Can Include

Average sales are a great place to start tracking your sales effort, but to gain more actionable insights, your dashboard should also include other KPIs that can provide useful context. These are just a few of the useful sales dashboard examples of KPIs you can include when building your BI platform.

  • Average Revenue Per Unit (ARPU) – This metric is like average sale value but measures how much revenue a single customer or user generates. It is found by dividing total revenue by the total number of units (or users).
  • Sales per Rep – Average sales don’t give you a look into how individual salespeople may be performing. Adding sales per rep will provide a more granular look at your sales operations.
  • Opex to Sales – Raw sales data provides insight, but little context. Understanding how operating expenses relate to sales helps clarify the real value of a sale. If the Opex is too high, even large sales offer little real value.

    Source

Feb 14, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Conditional Risk  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Battling Misinformation in Customer Experience Management by bobehayes

>> RSPB Conservation Efforts Take Flight Thanks To Data Analytics by analyticsweekpick

>> @SidProbstein / @AIFoundry on Leading #DataDriven Technology Transformation by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> North America Quality Management In Healthcare Market Growth Analysis, and Forecast Including Factors … – Industry Strategy Under Health Analytics

>> New Orioles GM Mike Elias brings former Astros analytics chief Sig Mejdal along to Baltimore – Baltimore Sun Under Analytics

>> Nutanix Joins IoT And Edge Computing Bandwagon With Xi IoT Platform – Forbes Under IOT

More NEWS ? Click Here

[ FEATURED COURSE]

Probability & Statistics

This course introduces students to the basic concepts and logic of statistical reasoning and gives the students introductory-level practical ability to choose, generate, and properly interpret appropriate descriptive and… more

[ FEATURED READ]

Storytelling with Data: A Data Visualization Guide for Business Professionals

Storytelling with Data teaches you the fundamentals of data visualization and how to communicate effectively with data. You’ll discover the power of storytelling and the way to make data a pivotal point in your story. Th… more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning. That is, the numbers in your data represent something of interest, an outcome that is important to your business. And the meaning of those numbers depends on the veracity of your data.

[ DATA SCIENCE Q&A]

Q:How do you assess the statistical significance of an insight?
A: * Is this insight just observed by chance, or is it a real effect?
Statistical significance can be assessed using hypothesis testing:
– State a null hypothesis, which is usually the opposite of what we wish to show (e.g., classifiers A and B perform equivalently; treatment A is equal to treatment B)
– Choose a suitable statistical test and test statistic with which to reject the null hypothesis
– Choose a critical region for the test statistic to lie in that is extreme enough for the null hypothesis to be rejected (the p-value threshold)
– Calculate the observed test statistic from the data and check whether it lies in the critical region

Common tests:
– One-sample Z-test
– Two-sample Z-test
– One-sample t-test
– Paired t-test
– Two-sample pooled t-test (equal variances)
– Two-sample unpooled t-test with unequal variances and unequal sample sizes (Welch’s t-test)
– Chi-squared test for variance
– Chi-squared test for goodness of fit
– ANOVA (for instance: are two regression models equal? F-test)
– Regression F-test (i.e., is at least one of the predictors useful in predicting the response?)
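
As a worked illustration of the steps above, the sketch below (Python, using scipy) runs one of the listed tests, Welch’s two-sample t-test, on made-up accuracy scores for classifiers A and B; the numbers are invented purely for demonstration.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  # Hypothetical accuracy scores for classifiers A and B over 30 evaluation runs each.
  scores_a = rng.normal(loc=0.82, scale=0.03, size=30)
  scores_b = rng.normal(loc=0.80, scale=0.05, size=30)

  # Null hypothesis: classifiers A and B perform equivalently (equal mean accuracy).
  # Welch's t-test assumes neither equal variances nor equal sample sizes.
  t_stat, p_value = stats.ttest_ind(scores_a, scores_b, equal_var=False)

  alpha = 0.05  # significance level defining the critical region
  print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
  print("Reject the null hypothesis" if p_value < alpha else "Fail to reject the null hypothesis")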

Source

[ VIDEO OF THE WEEK]

Discussing #InfoSec with @travturn, @hrbrmstr(@rapid7) @thebearconomist(@boozallen) @yaxa_io

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Torture the data, and it will confess to anything. – Ronald Coase

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @ScottZoldi, @FICO

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

571 new websites are created every minute of the day.

Sourced from: Analytics.CLUB #WEB Newsletter

Why “Big Data” Is a Big Deal

DATA NOW STREAM from daily life: from phones and credit cards and televisions and computers; from the infrastructure of cities; from sensor-equipped buildings, trains, buses, planes, bridges, and factories. The data flow so fast that the total accumulation of the past two years—a zettabyte—dwarfs the prior record of human civilization. “There is a big data revolution,” says Weatherhead University Professor Gary King. But it is not the quantity of data that is revolutionary. “The big data revolution is that now we can do something with the data.”

The revolution lies in improved statistical and computational methods, not in the exponential growth of storage or even computational capacity, King explains. The doubling of computing power every 18 months (Moore’s Law) “is nothing compared to a big algorithm”—a set of rules that can be used to solve a problem a thousand times faster than conventional computational methods could. One colleague, faced with a mountain of data, figured out that he would need a $2-million computer to analyze it. Instead, King and his graduate students came up with an algorithm within two hours that would do the same thing in 20 minutes—on a laptop: a simple example, but illustrative.

New ways of linking datasets have played a large role in generating new insights. And creative approaches to visualizing data—humans are far better than computers at seeing patterns—frequently prove integral to the process of creating knowledge. Many of the tools now being developed can be used across disciplines as seemingly disparate as astronomy and medicine. Among students, there is a huge appetite for the new field. A Harvard course in data science last fall attracted 400 students, from the schools of law, business, government, design, and medicine, as well as from the College, the School of Engineering and Applied Sciences (SEAS), and even MIT. Faculty members have taken note: the Harvard School of Public Health (HSPH) will introduce a new master’s program in computational biology and quantitative genetics next year, likely a precursor to a Ph.D. program. In SEAS, there is talk of organizing a master’s in data science.

“There is a movement of quantification rumbling across fields in academia and science, industry and government and nonprofits,” says King, who directs Harvard’s Institute for Quantitative Social Science (IQSS), a hub of expertise for interdisciplinary projects aimed at solving problems in human society. Among faculty colleagues, he reports, “Half the members of the government department are doing some type of data analysis, along with much of the sociology department and a good fraction of economics, more than half of the School of Public Health, and a lot in the Medical School.” Even law has been seized by the movement to empirical research—“which is social science,” he says. “It is hard to find an area that hasn’t been affected.”

The story follows a similar pattern in every field, King asserts. The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value. As an example, he describes how Kevin Quinn, formerly an assistant professor of government at Harvard, ran a contest comparing his statistical model to the qualitative judgments of 87 law professors to see which could best predict the outcome of all the Supreme Court cases in a year. “The law professors knew the jurisprudence and what each of the justices had decided in previous cases, they knew the case law and all the arguments,” King recalls. “Quinn and his collaborator, Andrew Martin [then an associate professor of political science at Washington University], collected six crude variables on a whole lot of previous cases and did an analysis.” King pauses a moment. “I think you know how this is going to end. It was no contest.” Whenever sufficient information can be quantified, modern statistical methods will outperform an individual or small group of people every time.

In marketing, familiar uses of big data include “recommendation engines” like those used by companies such as Netflix and Amazon to make purchase suggestions based on the prior interests of one customer as compared to millions of others. Target famously (or infamously) used an algorithm to detect when women were pregnant by tracking purchases of items such as unscented lotions—and offered special discounts and coupons to those valuable patrons. Credit-card companies have found unusual associations in the course of mining data to evaluate the risk of default: people who buy anti-scuff pads for their furniture, for example, are highly likely to make their payments.

In the public realm, there are all kinds of applications: allocating police resources by predicting where and when crimes are most likely to occur; finding associations between air quality and health; or using genomic analysis to speed the breeding of crops like rice for drought resistance. In more specialized research, to take one example, creating tools to analyze huge datasets in the biological sciences enabled associate professor of organismic and evolutionary biology Pardis Sabeti, studying the human genome’s billions of base pairs, to identify genes that rose to prominence quickly in the course of human evolution, determining traits such as the ability to digest cow’s milk, or resistance to diseases like malaria.

King himself recently developed a tool for analyzing social media texts. “There are now a billion social-media posts every two days…which represent the largest increase in the capacity of the human race to express itself at any time in the history of the world,” he says. No single person can make sense of what a billion other people are saying. But statistical methods developed by King and his students, who tested his tool on Chinese-language posts, now make that possible. (To learn what he accidentally uncovered about Chinese government censorship practices, see “Reverse-engineering Chinese Censorship.”)

King also designed and implemented “what has been called the largest single experimental design to evaluate a social program in the world, ever,” reports Julio Frenk, dean of HSPH. “My entire career has been guided by the fundamental belief that scientifically derived evidence is the most powerful instrument we have to design enlightened policy and produce a positive social transformation,” says Frenk, who was at the time minister of health for Mexico. When he took office in 2000, more than half that nation’s health expenditures were being paid out of pocket—and each year, four million families were being ruined by catastrophic healthcare expenses. Frenk led a healthcare reform that created, implemented, and then evaluated a new public insurance scheme, Seguro Popular. A requirement to evaluate the program (which he says was projected to cost 1 percent of the GDP of the twelfth-largest economy in the world) was built into the law. So Frenk (with no inkling he would ever come to Harvard), hired “the top person in the world” to conduct the evaluation, Gary King.

Given the complications of running an experiment while the program was in progress, King had to invent new methods for analyzing it. Frenk calls it “great academic work. Seguro Popular has been studied and emulated in dozens of countries around the world thanks to a large extent to the fact that it had this very rigorous research with big data behind it.” King crafted “an incredibly original design,” Frenk explains. Because King compared communities that received public insurance in the first stage (the rollout lasted seven years) to demographically similar communities that hadn’t, the results were “very strong,” Frenk says: any observed effect would be attributable to the program. After just 10 months, King’s study showed that Seguro Popular successfully protected families from catastrophic expenditures due to serious illness, and his work provided guidance for needed improvements, such as public outreach to promote the use of preventive care.

King himself says big data’s potential benefits to society go far beyond what has been accomplished so far. Google has analyzed clusters of search terms by region in the United States to predict flu outbreaks faster than was possible using hospital admission records. “That was a nice demonstration project,” says King, “but it is a tiny fraction of what could be done” if it were possible for academic researchers to access the information held by companies. (Businesses now possess more social-science data than academics do, he notes—a shift from the recent past, when just the opposite was true.) If social scientists could use that material, he says, “We could solve all kinds of problems.” But even in academia, King reports, data are not being shared in many fields. “There are even studies at this university in which you can’t analyze the data unless you make the original collectors of the data co-authors.”

The potential for doing good is perhaps nowhere greater than in public health and medicine, fields in which, King says, “People are literally dying every day” simply because data are not being shared.

Bridges to Business

NATHAN EAGLE, an adjunct assistant professor at HSPH, was one of the first people to mine unstructured data from businesses with an eye to improving public health in the world’s poorest nations. A self-described engineer and “not much of an academic” (despite having held professorships at numerous institutions, including MIT), he has focused much of his work on innovative uses of cell-phone data. Drawn by the explosive growth of the mobile market in Africa, he moved in 2007 to a rural village on the Kenyan coast and began searching for ways to improve the lives of the people there. Within months, realizing that he would be more effective sharing his skills with others, he began teaching mobile-application development to students in the University of Nairobi’s computer-science department.

While there, he began working with the Kenyan ministry of health on a blood-bank monitoring system. The plan was to recruit nurses across the country to text the current blood-supply levels in their local hospitals to a central database. “We built this beautiful visualization to let the guys at the centralized blood banks in Kenya see in real time what the blood levels were in these rural hospitals,” he explains, “and more importantly, where the blood was needed.” In the first week, it was a giant success, as the nurses texted in the data and central monitors logged in every hour to see where they should replenish the blood supply. “But in the second week, half the nurses stopped texting in the data, and within about a month virtually no nurses were participating anymore.”

Eagle shares this tale of failure because the episode was a valuable learning experience. “The technical implementation was bulletproof,” he says. “It failed because of a fundamental lack of insight on my part…that had to do with the price of a text message. What I failed to appreciate was that an SMS represents a fairly substantial fraction of a rural nurse’s day wage. By asking them to send that text message we were asking them to essentially take a pay cut.”

Fortunately, Eagle was in a position to save the program. Because he was already working with most of the mobile operators in East Africa, he had access to their billing systems. The addition of a simple script let him credit the rural nurses with a small denomination of prepaid air time, about 10 cents’ worth—enough to cover the cost of the SMS “plus about a penny to say thank you in exchange for a properly formatted text message. Virtually every rural nurse reengaged,” he reports, and the program became a “relatively successful endeavor”—leading him to believe that cell phones could “really make an impact” on public health in developing nations, where there is a dearth of data and almost no capacity for disease surveillance.

Eagle’s next project, based in Rwanda, was more ambitious, and it also provided a lesson in one of the pitfalls of working with big data: that it is possible to find correlations in very large linked datasets without understanding causation. Working with mobile-phone records (which include the time and location of every call), he began creating models of people’s daily and weekly commuting patterns, termed their “radius of generation.” He began to notice patterns. Abruptly, people in a particular village would stop moving as much; he hypothesized that these patterns might indicate the onset of a communicable disease like the flu. Working with the Rwandan ministry of health, he compared the data on cholera outbreaks to his radius of generation data. Once linked, the two datasets proved startlingly powerful; the radius of generation in a village dropped two full weeks before a cholera outbreak. “We could even predict the magnitude of the outbreak based on the amount of decrease in the radius of generation,” he recalls. “I had built something that was performing in this unbelievable way.”

And in fact it was unbelievable. He tells this story as a “good example of why engineers like myself shouldn’t be doing epidemiology in isolation—and why I ended up joining the School of Public Health rather than staying within a physical-science department.” The model was not predicting cholera outbreaks, but pinpointing floods. “When a village floods and roads wash away, suddenly the radius of generation decreases,” he explains. “And it also makes the village more susceptible in the short term to a cholera outbreak. Ultimately, all this analysis with supercomputers was identifying where there was flooding—data that, frankly, you can get in a lot of other ways.”

Despite this setback, Eagle saw what was missing. If he could couple the data he had from the ministry of health and the mobile operators with on-the-ground reports of what was happening, then he would have a powerful tool for remote disease surveillance. “It opened my eyes to the fact that big data alone can’t solve this type of problem. We had petabytes* of data and yet we were building models that were fundamentally flawed because we didn’t have real insight about what was happening” in remote villages. Eagle has now built a platform that enables him to survey individuals in such countries by paying them small denominations of airtime (as with the Kenyan nurses) in exchange for answering questions: are they experiencing flu-like symptoms, sleeping under bednets, or taking anti-malarials? This ability to gather and link self-reported information to larger datasets has proven a powerful tool—and the survey technology has become a successful commercial entity named Jana, of which Eagle is co-founder and CEO.

New Paradigms—and Pitfalls

WILLY SHIH, Cizik professor of management practice at Harvard Business School, says that one of the most important changes wrought by big data is that their use involves a “fundamentally different way of doing experimental design.” Historically, social scientists would plan an experiment, decide what data to collect, and analyze the data. Now the low cost of storage (“The price of storing a bit of information has dropped 60 percent a year for six decades,” says Shih) has caused a rethinking, as people “collect everything and then search for significant patterns in the data.”

“This approach has risks,” Shih points out. One of the most prominent is data dredging, which involves searching for patterns in huge datasets. A traditional social-science study might assert that the results are significant with 95 percent confidence. That means, Shih points out, “that in one out of 20 instances” when dredging for results, “you will get results that are statistically significant purely by chance. So you have to remember that.” Although this is true for any statistical finding, the enormous number of potential correlations in very large datasets substantially magnifies the risk of finding spurious correlations.
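
To make the “1 in 20” point concrete, here is an illustrative Python sketch (not from the article) that dredges 1,000 random, unrelated variables against a random outcome; roughly 5 percent of them come out “significant” at the 95 percent confidence level purely by chance.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(42)
  n_samples, n_features = 200, 1000

  outcome = rng.normal(size=n_samples)                  # random outcome, related to nothing
  features = rng.normal(size=(n_samples, n_features))   # 1,000 random candidate predictors

  # Test every feature for correlation with the outcome and count the spurious "hits".
  p_values = [stats.pearsonr(features[:, j], outcome)[1] for j in range(n_features)]
  false_positives = sum(p < 0.05 for p in p_values)
  print(f"{false_positives} of {n_features} random features appear significant at p < 0.05")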

Eagle agrees that “you don’t get good scientific output from throwing everything against the wall and seeing what sticks.” No matter how much data exists, researchers still need to ask the right questions to create a hypothesis, design a test, and use the data to determine whether that hypothesis is true. He sees two looming challenges in data science. First, there aren’t enough people comfortable dealing with petabytes of data. “These skill sets need to get out of the computer-science departments and into public health, social science, and public policy,” he says. “Big data is having a transformative impact across virtually all academic disciplines—it is time for data science to be integrated into the foundational courses for all undergraduates.”

Safeguarding data is his other major concern, because “the privacy implications are profound.” Typically, the owners of huge datasets are very nervous about sharing even anonymized, population-level information like the call records Eagle uses. For the companies that hold it, he says, “There is a lot of downside to making this data open to researchers. We need to figure out ways to mitigate that concern and craft data-usage policies in ways that make these large organizations more comfortable with sharing these data, which ultimately could improve the lives of the millions of people who are generating it—and the societies in which they are living.”

John Quackenbush, an HSPH professor of computational biology and bioinformatics, shares Eagle’s twin concerns. But in some realms of biomedical big data, he says, the privacy problem is not easily addressed. “As soon as you touch genomic data, that information is fundamentally identifiable,” he explains. “I can erase your address and Social Security number and every other identifier, but I can’t anonymize your genome without wiping out the information that I need to analyze.” Privacy in such cases is achieved not through anonymity but by consent paired with data security: granting access only to authorized researchers. Quackenbush is currently collaborating with a dozen investigators—from HSPH, the Dana-Farber Cancer Institute, and a group from MIT’s Lincoln Labs expert in security—to develop methods to address a wide range of biomedical research problems using big data, including privacy.

He is also leading the development of HSPH’s new master’s program in computational biology and quantitative genetics, which is designed to address the extraordinary complexity of biomedical data. As Quackenbush puts it, “You are not just you. You have all this associated health and exposure information that I need in order to interpret your genomic information.”

A primary goal, therefore, is to give students practical skills in the collection, management, analysis, and interpretation of genomic data in the context of all this other health information: electronic medical records, public-health records, Medicare information, and comprehensive-disease data. The program is a joint venture between biostatistics and the department of epidemiology.

Really Big Data

LIKE EAGLE, Quackenbush came to public health from another discipline—in his case, theoretical and high-energy experimental physics. He first began working outside his doctoral field in 1992, when biologists for the Human Genome Project realized they needed people accustomed to collecting, analyzing, managing, and interpreting huge datasets. Physicists have been good at that for a long time.

The first full human genome sequence took five to 15 years to complete, and cost $1 billion to $3 billion (“Depending on whom you ask,” notes Quackenbush). By 2009, eight years later, the cost had dropped to $100,000 and took a year. At that point, says Quackenbush, “if my wife had a rare, difficult cancer, I would have mortgaged our house to sequence her genome.” Now a genome sequence takes a little more than 24 hours and costs about $1,000—the point at which it can be paid for “on a credit card. That simple statement alone,” he says, “underscores why the biomedical sciences have become so data-driven.

“We each carry two copies of the human genome—one from our mother and one from our father—that together comprise 6 billion base pairs,” Quackenbush continues, “a number equivalent to all the seconds in 190 years.” But knowledge of what all the genes encoded in the genome do and how they interact to influence health and disease remains woefully incomplete. To discover that, scientists will have to take genomic data and “put it in the context of your health. And we’ll have to take you and put you in the context of the population in which you live, the environmental factors you are exposed to, and the people you come in contact with—so as we look at the vast amount of data we can generate on you, the only way we can effectively interpret it is to put it in the context of the vast amount of data we can generate on almost everything related to you, your environment, and your health. We are moving from a big data problem to a really big data problem.”

Curtis Huttenhower, an HSPH associate professor of computational biology and bioinformatics, is one of Quackenbush’s really big data collaborators. He studies the function of the human microbiome, the bacteria that live in and on humans, principally in the gut, helping people extract energy from food and maintaining health. “There are 100 times more genes in the bugs than in a human’s genome,” he reports, and “it’s not unusual for someone to share 50 percent or less of their microbes with other people. Because no one has precisely the same combination of gut bacteria, researchers are still learning how those bacteria distinguish us from each other; meanwhile both human and microbial genetic privacy must be maintained.” Not only do microbiome studies confront 100 times more information per human subject than genome studies, but that 100 is different from person to person and changes slowly over time with age—and rapidly, as well, in response to factors like diet or antibiotics. Deep sequencing of 100 people during the human microbiome project, Huttenhower reports, yielded a thousand human genomes’ worth of sequencing data—“and we could have gotten more. But there is still no comprehensive catalog of what affects the microbiome,” says Huttenhower. “We are still learning.”

Recently, he has been studying microbes in the built environment: from the hanging straps of Boston’s transit system to touchscreen machines and human skin. The Sloan Foundation, which funded the project, wants to know what microbes are there and how they got there. Huttenhower is interested in the dynamics of how entire communities of bugs are transferred from one person to another and at what speed. “Everyone tends to have a slightly different version of Helicobacter pylori, a bacterium that can cause gastric cancer and is transmitted vertically from parents to children,” he says. “But what other portions of the microbiome are mostly inherited, rather than acquired from our surroundings? We don’t know yet.” As researchers learn more about how the human genome and the microbiome interact, it might become possible to administer probiotics or more targeted antibiotics to treat or prevent disease. That would represent a tremendous advance in clinical practice because right now, when someone takes a broad-spectrum antibiotic, it is “like setting off a nuke,” says Huttenhower. “They instantly change the shape of the microbiome for a few weeks to months.” Exactly how the microbiome recovers is not known.

A major question in microbiome studies involves the dynamics of coevolution: how the bugs evolved in humans over hundreds of thousands of years, and whether changes in the microbiome might be linked to ailments that have become more prevalent recently, such as irritable bowel disease, allergies, and metabolic syndrome (a precursor to diabetes). Because of the timescale of the change in the patterns of these ailments, the causes can’t be genomic, says Huttenhower. “They could be environmental, but the timescale is also right for the kinds of ecological changes that would be needed in microbial communities,” which can change on scales ranging from days to decades.

“Just think about the number of things that have changed in the past 50 years that affect microbes,” he continues. “Commercial antibiotics didn’t exist until about 50 years ago; our locations have changed; and over a longer period, we have gone from 75 percent of the population working in agriculture to 2 percent; our exposure to animals has changed; our exposure to the environment; our use of agricultural antibiotics has changed; what we eat has changed; the availability of drugs has changed. There are so many things that are different over that timescale that would specifically affect microbes. That is why there is some weight given to the microbiome link to the hygiene hypothesis”—the theory that lack of early childhood exposure to a diverse microbiota has led to widespread problems in the establishment of healthy immune systems.

Understanding the links between all these effects will involve data analysis that will dwarf the human genome project and become the work of decades. Like Gary King, Huttenhower favors a good algorithm over a big computer when tackling such problems. “We prefer to build models or methods that are efficient enough to run on a[n entry-level] server. But even when you are efficient, when you scale up to populations of hundreds, thousands, or tens of thousands of people,” massive computational capability is needed.

Recently, having realized that large populations of people will need to be studied to advance microbiome science, Huttenhower has begun exploring how to deploy and run his models on Amazon’s cloud—thousands of linked computers running in parallel. Amazon has teamed with the National Institutes of Health to donate server time for such studies. Says Huttenhower, “It’s an important way for getting manageable big data democratized throughout the research community.”

Discerning Patterns in Complexity

MAKING SENSE of the relationships between distinct kinds of information is another challenge facing researchers. What insights can be gleaned from connecting gene sequences, health records, and environmental influences? And how can humans understand the results?

One of the most powerful tools for facilitating understanding of vast datasets is visualization. Hanspeter Pfister, Wang professor of computer science and director of the Institute for Applied Computational Science, works with scientists in genomics and systems biology to help them visualize what are called high-dimensional data sets (with multiple categories of data being compared). For example, members of his group have created a visualization for use by oncologists that connects gene sequence and activation data with cancer types and stages, treatments, and clinical outcomes. That allows the data to be viewed in a way that shows which particular gene expression pattern is associated with high mortality regardless of cancer type, for example, giving an important, actionable insight for how to devise new treatments.

Pfister teaches students how to turn data into visualizations in Computer Science 109, “Data Science,” which he co-teaches with Joseph K. Blitzstein, professor of the practice in statistics. “It is very important to make sure that what we will be presenting to the user is understandable, which means we cannot show it all,” says Pfister. “Aggregation, filtering, and clustering are hugely important for us to reduce the data so that it makes sense for a person to look at.” This is a different method of scientific inquiry that ultimately aims to create systems that let humans combine what they are good at—asking the right questions and interpreting the results—with what machines are good at: computation, analysis, and statistics using large datasets. Student projects have run the gamut from the evolution of the American presidency and the distribution of tweets for competitive product analysis, to predicting the stock market and analyzing the performance of NHL hockey teams.

Pfister’s advanced students and postdoctoral fellows work with scientists who lack the data science skills they now need to conduct their research. “Every collaboration pushes us into some new, unknown territory in computer science,” he says.

The flip side of Pfister’s work in creating visualizations is the automated analysis of images. For example, he works with Knowles professor of molecular and cellular biology Jeff Lichtman, who is also Ramon y Cajal professor of arts and sciences, to reconstruct and visualize neural connections in the brain. Lichtman and his team slice brain tissue very thinly, providing Pfister’s group with stacks of high-resolution images. “Our system then automatically identifies individual cells and labels them consistently,” such that each neuron can be traced through a three-dimensional stack of images, Pfister reports. Even working with only a few hundred neurons involves tens of thousands of connections. One cubic millimeter of mouse brain represents a thousand terabytes (a petabyte) of image data.

Pfister has also worked with radioastronomers. The head teaching fellow in his data science course, astronomer Chris Beaumont, has developed software (Glue) for linking and visualizing large telescope datasets. Beaumont’s former doctoral adviser (for whom he now works as senior software developer on Glue), professor of astronomy Alyssa Goodman, teaches her own course in visualization (Empirical and Mathematical Reasoning 19, “The Art of Numbers”). Goodman uses visualization as an exploratory technique in her efforts to understand interstellar gas—the stuff of which stars are born. “The data volume is not a concern,” she says; even though a big telescope can capture a petabyte of data in a night, astronomers have a long history of dealing with large quantities of data. The trick, she says, is making sense of it all. Data visualizations can lead to new insights, she says, because “humans are much better at pattern recognition” than computers. In a recent presentation, she showed how a three-dimensional visualization of a cloud of gas in interstellar space had led to the discovery of a previously unknown cloud structure. She will often work by moving from a visualization back to math, and then back to another visualization.

Many of the visualization tools that have been created for medical imaging and analysis can be adapted for use in astronomy, she says. A former undergraduate advisee of Goodman’s, Michelle Borkin ’06, now a doctoral candidate in SEAS (Goodman and Pfister are her co-advisers), has explored cross-disciplinary uses of data-visualization techniques, and conducted usability studies of these visualizations. In a particularly successful example, she showed how different ways of displaying blood-flow could dramatically change a cardiac physician’s ability to diagnose heart disease. Collaborating with doctors and simulators in a project to model blood flow called “Multiscale Hemodynamics,” Borkin first tested a color-coded visual representation of blood flow in branched arteries built from billions of blood cells and millions of fluid points. Physicians were able to locate and successfully diagnose arterial blockages only 39 percent of the time. Using Borkin’s novel visualization—akin to a linear side-view of the patient’s arteries—improved the rate of successful diagnosis to 62 percent. Then, simply by changing the colors based on an understanding of the way the human visual cortex works, Borkin found she could raise the rate of successful diagnosis to 91 percent.

Visualization tools even have application in the study of collections, says Pfister. Professor of Romance languages and literatures Jeffrey Schnapp, faculty director of Harvard’s metaLAB, is currently at work on a system for translating collections metadata into readily comprehensible, information-rich visualizations. Starting with a dataset of 17,000 photographs—trivial by big data standards—from the missing paintings of the Italian Renaissance collection assembled by Bernard Berenson (works that were photographed but have subsequently disappeared), Schnapp and colleagues have created a way to explore the collection by means of the existing descriptions of objects, classifications, provenance data, media, materials, and subject tags.

The traditional use of such inventory data was to locate and track individual objects, he continues. “We are instead creating a platform that you can use to make arguments, and to study collections as aggregates from multiple angles. I can’t look at everything in the Fogg Museum’s collections even if I am Tom Lentz [Cabot director of the Harvard Art Museums], because there are 250,000 objects. Even if I could assemble them all in a single room,” Schnapp says, “I couldn’t possibly see them all.” But with a well-structured dataset, “We can tell stories: about place, time, distribution of media, shifting themes through history and on and on.” In the case of the Berenson photo collection, one might ask, “What sorts of stories does the collection tell us about the market for Renaissance paintings during Berenson’s lifetime? Where are the originals now? Do they still exist? Who took the photographs and why? How did the photo formats evolve with progress in photographic techniques?”

This type of little “big data” project makes the incomprehensible navigable and potentially understandable. “Finding imaginative, innovative solutions for creating qualitative experiences of collections is the key to making them count,” Schnapp says. Millions of photographs in the collections of institutions such as the Smithsonian, for example, will probably never be catalogued, even though they represent the richest, most complete record of life in America. It might take an archivist half a day just to research a single one, Schnapp points out. But the photographs are being digitized, and as they come on line, ordinary citizens with local information and experience can contribute to making them intelligible in ways that add value to the collection as an aggregate. The Berenson photographs are mostly of secondary works of art, and therefore not necessarily as interesting individually as they are as a collection. They perhaps tell stories about how works were produced in studios, or how they circulated. Visualizations of the collection grouped by subject are telling, if not surprising. Jesus represents the largest portion, then Mary, and so on down to tiny outliers, such as a portrait of a woman holding a book, that raise rich questions for the humanities, even though a computer scientist might regard them as problems to fix. “We’re on the culture side of the divide,” Schnapp says, “so we sometimes view big data from a slightly different angle, in that we are interested in the ability to zoom between the micro level of analysis (an individual object), the macro level (a collection), and the massively macro (multiple collections) to see what new knowledge allows you to expose, and the stories it lets you tell.”

• • •

DATA, IN THE FINAL ANALYSIS, are evidence. The forward edge of science, whether it drives a business or marketing decision, provides an insight into Renaissance painting, or leads to a medical breakthrough, is increasingly being driven by quantities of information that humans can understand only with the help of math and machines. Those who possess the skills to parse this ever-growing trove of information sense that they are making history in many realms of inquiry. “The data themselves, unless they are actionable, aren’t relevant or interesting,” is Nathan Eagle’s view. “What is interesting,” he says, “is what we can now do with them to make people’s lives better.” John Quackenbush says simply: “From Kepler using Tycho Brahe’s data to build a heliocentric model of the solar system, to the birth of statistical quantum mechanics, to Darwin’s theory of evolution, to the modern theory of the gene, every major scientific revolution has been driven by one thing, and that is data.”

Source

Feb 07, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Conditional Risk  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Global Business By The Big Analytics – Playcast – Data Analytics Leadership Playbook Podcast by v1shal

>> Jason Carmel ( @defenestrate99 / @possible ) Leading Analytics, Data, Digital & Marketing by v1shal

>> It’s Official! Talend to Welcome Stitch to the Family! by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Big Data Made Simple – insideBIGDATA Under Big Data

>> India cannot afford to ignore Data Science – Economic Times (blog) Under Data Science

>> Have you heard? Hybrid cloud is the ideal IT model – Information Age Under Hybrid Cloud

More NEWS ? Click Here

[ FEATURED COURSE]

Intro to Machine Learning

Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most stra… more

[ FEATURED READ]

The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t

People love statistics. Statistics, however, do not always love them back. The Signal and the Noise, Nate Silver’s brilliant and elegant tour of the modern science-slash-art of forecasting, shows what happens when Big Da… more

[ TIPS & TRICKS OF THE WEEK]

Keeping Biases Checked during the last mile of decision making
Today a data-driven leader, data scientist, or data-driven expert is constantly put to the test by having to help his or her team solve problems using skill and expertise. Believe it or not, part of that decision tree is derived from intuition, which introduces a bias into our judgement and can taint the suggestions. Most skilled professionals understand and handle these biases well, but in a few cases we give in to small traps and find ourselves caught in biases that impair our judgement. So, it is important to keep intuition bias in check when working on a data problem.

[ DATA SCIENCE Q&A]

Q:How to detect individual paid accounts shared by multiple users?
A: * Check geographic region: a login from Paris on Friday morning and a login from Tokyo on Friday evening
* Bandwidth consumption: a user going over some high limit
* Count of live sessions: 100 sessions per day (more than 4 per hour) is more than one person can plausibly generate
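
A hedged pandas sketch of these heuristics, assuming a hypothetical login log with account_id, timestamp, city and bytes columns (the column names and thresholds are illustrative, not prescribed by the answer above):

  import pandas as pd

  # Hypothetical login log: one row per session.
  logins = pd.DataFrame({
      "account_id": [1, 1, 2, 2, 2],
      "timestamp": pd.to_datetime(["2019-02-01 08:00", "2019-02-01 19:00",
                                   "2019-02-01 09:00", "2019-02-01 09:30",
                                   "2019-02-01 10:00"]),
      "city": ["Paris", "Tokyo", "Boston", "Boston", "Boston"],
      "bytes": [2e9, 3e9, 1e8, 1e8, 1e8],
  })

  # Aggregate per account per day.
  daily = logins.groupby(["account_id", pd.Grouper(key="timestamp", freq="D")]).agg(
      distinct_cities=("city", "nunique"),  # far-apart locations on the same day
      sessions=("city", "size"),            # implausibly many sessions per day
      bandwidth=("bytes", "sum"),           # unusually high consumption
  )

  suspicious = daily[(daily["distinct_cities"] > 1)
                     | (daily["sessions"] > 100)
                     | (daily["bandwidth"] > 1e10)]
  print(suspicious)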

Source

[ VIDEO OF THE WEEK]

@JohnTLangton from @Wolters_Kluwer discussed his #AI Lead Startup Journey #FutureOfData #Podcast

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Numbers have an important story to tell. They rely on you to give them a voice. – Stephen Few

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @Beena_Ammanath, @GE

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Every person in the US tweeting three tweets per minute for 26,976 years.

Sourced from: Analytics.CLUB #WEB Newsletter

Jan 31, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Convincing  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Data Science is more than Machine Learning  by analyticsweek

>> The Upper Echelons of Cognitive Computing: Deriving Business Value from Speech Recognition by jelaniharper

>> Talend and Splunk: Aggregate, Analyze and Get Answers from Your Data Integration Jobs by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Regulating the Internet of Things – RFID Journal Under Internet Of Things

>> Analytics Use Grows in Parallel with Data Volumes – Datanami Under Analytics

>> Startup right in 2019: how to set up your customer experience for success – SmartCompany.com.au Under Customer Experience

More NEWS ? Click Here

[ FEATURED COURSE]

Process Mining: Data science in Action

Process mining is the missing link between model-based process analysis and data-oriented analysis techniques. Through concrete data sets and easy to use software the course provides data science knowledge that can be ap… more

[ FEATURED READ]

Introduction to Graph Theory (Dover Books on Mathematics)

A stimulating excursion into pure mathematics aimed at “the mathematically traumatized,” but great fun for mathematical hobbyists and serious mathematicians as well. Requiring only high school algebra as mathematical bac… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from zombie apocalypse from unscalable models
One living, breathing zombie in today’s analytical models is the absence of error bars. Not every model is scalable or holds up as data volumes increase. The error bars attached to almost every model should be duly calibrated; as business models rake in more data, the error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failures that lead to a Halloween we never want to see.
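
One simple way to put error bars on a model metric is a bootstrap confidence interval. The Python sketch below is a generic illustration; the per-record errors are placeholder numbers, not tied to any particular model mentioned here.

  import numpy as np

  rng = np.random.default_rng(7)
  # Placeholder per-record errors from some model scored on a holdout set.
  errors = rng.normal(loc=2.0, scale=0.5, size=500)

  # Bootstrap the mean error to get an interval instead of a single point estimate.
  boot_means = [rng.choice(errors, size=errors.size, replace=True).mean()
                for _ in range(2000)]
  low, high = np.percentile(boot_means, [2.5, 97.5])
  print(f"mean error = {errors.mean():.3f}, 95% CI = ({low:.3f}, {high:.3f})")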

[ DATA SCIENCE Q&A]

Q:You are compiling a report for user content uploaded every month and notice a spike in uploads in October. In particular, a spike in picture uploads. What might you think is the cause of this, and how would you test it?
A: * Halloween pictures?
* Look at uploads in countries that don’t observe Halloween as a sort of counterfactual analysis
* Compare the mean number of uploads in October with the mean in September: hypothesis testing
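
A hedged Python sketch of the last two checks, assuming hypothetical daily picture-upload counts for September and October, split into countries that do and do not observe Halloween (all numbers are invented for illustration):

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(1)
  # Hypothetical daily picture-upload counts (in thousands).
  september = rng.normal(100, 10, size=30)
  october = rng.normal(115, 10, size=31)                # apparent spike in October
  october_no_halloween = rng.normal(101, 10, size=31)   # countries that don't observe Halloween

  # Two-sample test: is the October mean really higher than September's?
  print(stats.ttest_ind(october, september, equal_var=False))

  # Counterfactual check: the absence of a comparable spike where Halloween
  # isn't observed supports the Halloween-pictures hypothesis.
  print(stats.ttest_ind(october_no_halloween, september, equal_var=False))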

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek #FutureOfData with Robin Thottungal(@rathottungal), Chief Data Scientist at @EPA

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

It is a capital mistake to theorize before one has data. Insensibly, one begins to twist the facts to suit theories, instead of theories to suit facts. – Arthur Conan Doyle

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Dr. Nipa Basu, @DnBUS

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Retailers who leverage the full power of big data could increase their operating margins by as much as 60%.

Sourced from: Analytics.CLUB #WEB Newsletter

Key Considerations for Converting Legacy ETL to Modern ETL

Recently, there has been a surge in our customers who want to move away from legacy data integration platforms to adopting Talend as their one-stop shop for all their integration needs. Some of the organizations have thousands of legacy ETL jobs to convert to Talend before they are fully operational. The big question that lurks in everyone’s mind is how to get past this hurdle.

Defining Your Conversion Strategy

To begin with, every organization undergoing such a change needs to focus on three key aspects:

  1. Will the source and/or target systems change? Is this just an ETL conversion from their legacy system to modern ETL like Talend?
  2. Is the goal to re-platform as well? Will the target system change?
  3. Will the new platform reside on the cloud or continue to be on-premise?

This is where Talend’s Strategic Services can help carve a successful conversion strategy and implementation roadmap for our customers. In the first part of this three-blog series, I will focus on the first aspect of conversion.

Before we dig into it, it’s worthwhile to note a very important point – the architecture of the product itself. Talend is a Java code generator; unlike its competitors (where source code is migrated from one environment to the other), Talend actually builds the code and migrates the built binaries from one environment to the other. In many organizations, it takes a few sprints to fully internalize this fact, as architects and developers are used to the old way of thinking about code migration.

The upside of this architecture is that it enables a continuous integration environment that was not possible with legacy tools. A complete architecture of Talend’s platform includes not only the product itself but also third-party products such as Jenkins, Nexus (an artifact repository), and a source-control repository like Git. Compare this to a Java programming environment and you can clearly see the similarities. In short, it is extremely important to understand that Talend works differently, and that is what sets it apart from the rest of the crowd.

Where Should You Get Started?

Let’s focus on the first aspect, conversion. Assuming that nothing else changes except for the ETL jobs that integrate, cleanse, transform and load the data, this is an attractive opportunity to leverage a conversion tool – something that ingests legacy code and generates Talend code. It is not a good idea to try to replicate the entire business logic of all ETL jobs manually, as there is a great risk of introducing errors and prolonging QA cycles. However, as anyone with a sound technical background knows, it is also not a good idea to rely completely on the automated conversion process, since the comparison may not always be apples to apples. The right approach is to use the automated conversion process as an accelerator, with some manual intervention.

Bright minds bring in success. Keeping that mantra in mind, first build your team:

  • Core Team – Identify architects, senior developers and SMEs (data analysts, business analysts, people who live and breathe data in your organization)
  • Talend Experts – Bring in experts in the tool who can guide you and provide best practices and solutions for all your conversion-related effort. They will also participate in performance-tuning activities
  • Conversion Team – A team that leverages a conversion tool to automate the conversion process. A solid team with a solid tool and open to enhancing the tool along the way to automate new designs and specifications
  • QA Team – Seasoned QA professionals that help you breeze through your QA testing activities

Now comes the approach – Follow this approach for each sprint:

Categorize 

Analyze the ETL jobs and categorize them depending on the complexity of the jobs based on functionality and components used. Some good conversion tools provide analyzers that can help you determine the complexity of the jobs to be converted. Spread a healthy mix of varying complexity jobs across each sprint.

Convert 

Leverage a conversion tool to automate the conversion of the jobs. There are certain functionalities, such as an “unconnected lookup”, that can be achieved through an innovative method in Talend. Seasoned conversion tools will help automate such functionalities.

Optimize

Focus on job design and performance tuning. This is your chance to revisit the design, if required, either to leverage better components or to go for a complete redesign. Also focus on performance optimization. For high-volume jobs, you can increase throughput and performance by leveraging Talend’s big data components; it is not uncommon to completely redesign a converted Talend Data Integration job as a Talend Big Data job to drastically improve performance. A further advantage is that standard data integration jobs can be executed seamlessly alongside big data jobs.

Complete 

Unit test and ensure all functional and performance acceptance criteria are satisfied before handing the job over to QA.

QA 

Take an automated QA approach to comparing the result sets produced by the old and new ETL jobs. At a minimum, focus on the checks below (a minimal comparison sketch follows the list):

  • Compare row counts from the old process to those of the new one
  • Compare each data element loaded by the old process to that of the new one
  • Verify “upsert” and “delete” logic work as expected
  • Introduce an element of regression testing to ensure fixes are not breaking other functionalities
  • Performance testing to ensure SLAs are met
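As a starting point for the first two checks, here is a minimal Python sketch, assuming the old and new jobs load into two comparable tables that are reachable from Python. sqlite3 is used here only to keep the example self-contained; the table, key, and column names are hypothetical.

    import sqlite3

    def compare_tables(conn, old_table, new_table, key_cols, value_cols):
        cur = conn.cursor()
        # 1. Row counts should match.
        old_count = cur.execute(f"SELECT COUNT(*) FROM {old_table}").fetchone()[0]
        new_count = cur.execute(f"SELECT COUNT(*) FROM {new_table}").fetchone()[0]
        print(f"row counts: old={old_count} new={new_count} match={old_count == new_count}")
        # 2. Every data element should match, keyed on the business key.
        cols = ", ".join(key_cols + value_cols)
        old_rows = {row[:len(key_cols)]: row for row in cur.execute(f"SELECT {cols} FROM {old_table}")}
        new_rows = {row[:len(key_cols)]: row for row in cur.execute(f"SELECT {cols} FROM {new_table}")}
        mismatched = [k for k in old_rows if k in new_rows and new_rows[k] != old_rows[k]]
        missing = [k for k in old_rows if k not in new_rows]
        extra = [k for k in new_rows if k not in old_rows]
        print(f"mismatched={len(mismatched)} missing={len(missing)} extra={len(extra)}")

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers_old (id INTEGER, name TEXT, balance REAL);
            CREATE TABLE customers_new (id INTEGER, name TEXT, balance REAL);
            INSERT INTO customers_old VALUES (1, 'Ada', 10.0), (2, 'Bob', 20.0);
            INSERT INTO customers_new VALUES (1, 'Ada', 10.0), (2, 'Bob', 25.0);
        """)
        compare_tables(conn, "customers_old", "customers_new", ["id"], ["name", "balance"])

In practice you would point the same checks at the legacy target and the Talend target and schedule them as part of the QA pipeline, alongside the “upsert”/“delete”, regression, and performance checks above.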

Now, for several reasons, there will be instances where a completely new ETL process must be designed for a certain functionality in order to keep processing data the same way as before. In such situations, lean on the “Talend Experts” team: it liaises with the team doing the automated conversion and works closely with the core team to propose the best solution. That solution is then turned into a template and handed to the conversion team, who can automate the new design into the affected jobs.

As you can see, these activities can be part of the “Categorize” and “Convert” phases of the approach.

Finally, I would suggest chunking the conversion effort into logical waves. Do not go for a big bang approach since the conversion effort could be a lengthy one depending on the number of legacy ETL jobs in an organization.

Conclusion

This brings me to the end of the first part of the three-blog series. Below are the five key takeaways of this blog:

  1. Define scope and spread the conversion effort across multiple waves
  2. Identify a core team, Talend experts, a conversion team equipped with a solid conversion tool, and seasoned QA professionals
  3. Follow an iterative approach for the conversion effort
  4. Explore Talend’s big data capabilities to enhance performance
  5. Innovate solutions for new functionalities, turn them into templates, and automate their conversion

Stay tuned for the next two!!


Source: Key Considerations for Converting Legacy ETL to Modern ETL

Jan 24, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Accuracy check


[ AnalyticsWeek BYTES]

>> January 30, 2017 Health and Biotech analytics news roundup by pstein

>> Oct 18, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> How will social media analytics bring your business closer to success? by thomassujain


[ NEWS BYTES]

>> 8 common questions from aspiring data scientists, answered – Tech in Asia Under Data Scientist

>> D-Link Camera Poses Data Security Risk, Consumer Reports Finds … – ConsumerReports.org Under Data Security

>> Cyber Security – KSNF/KODE – FourStatesHomepage.com Under cyber security


[ FEATURED COURSE]

Process Mining: Data science in Action


Process mining is the missing link between model-based process analysis and data-oriented analysis techniques. Through concrete data sets and easy to use software the course provides data science knowledge that can be ap… more

[ FEATURED READ]

The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t


People love statistics. Statistics, however, do not always love them back. The Signal and the Noise, Nate Silver’s brilliant and elegant tour of the modern science-slash-art of forecasting, shows what happens when Big Da… more

[ TIPS & TRICKS OF THE WEEK]

Strong business case could save your project
Like anything in corporate culture, a project is often about the business, not the technology, and the same thinking applies to data analysis: it is not always about the technicality but about the business implications. Data science project success criteria should include project management success criteria as well. This ensures smooth adoption, easy buy-ins, room for wins, and cooperating stakeholders. So a good data scientist should also possess some qualities of a good project manager.

[ DATA SCIENCE Q&A]

Q: Give examples of bad and good visualizations?
A: Bad visualization:
- Pie charts: difficult to make comparisons between items when area is used, especially when there are lots of items
- Color choice for classes: abundant use of red, orange and blue. Readers may assume the colors mean good (blue) versus bad (orange and red) when they are simply associated with a specific segment
- 3D charts: can distort perception and therefore skew the data
- Line styles: dashed and dotted lines can be distracting; prefer a solid line in a line chart

Good visualization:
- Heat map with a single color: some colors stand out more than others, giving more weight to that data; a single color with varying shades shows intensity better
- Adding a trend line (regression line) to a scatter plot helps the reader see trends

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek Keynote: The CMO isn't satisfied: Judah Phillips


[ QUOTE OF THE WEEK]

I’m sure, the highest capacity of storage device, will not enough to record all our stories; because, everytime with you is very valuable da

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MPFlowersNYC, @enigma_data


[ FACT OF THE WEEK]

100 terabytes of data are uploaded to Facebook daily.

Sourced from: Analytics.CLUB #WEB Newsletter

Do Detractors Really Say Bad Things about a Company?

Can you think of a bad experience you had with a company?

Did you tell a friend about the bad experience?

Negative word of mouth can be devastating for company and product reputation. If companies can track it and do something to fix the problem, the damage can be contained.

This is one of the selling points of the Net Promoter Score. That is, customers who rate companies low on a 0 to 10 scale (6 and below) are dubbed “Detractors” because they’re more likely to spread negative word of mouth and discourage others from buying from a company. Companies with too much negative word of mouth would be unable to grow as much as others with more positive word of mouth.

But is there any evidence that low scorers are really more likely to say bad things?

Is the NPS Scoring Divorced from Reality?

There is some concern that these NPS designations are divorced from reality: that there is no evidence (or reason) for classifying detractors as 0 to 6 and promoters as 9 to 10. If the designations are indeed arbitrary, that is concerning. (See the tweet from a vocal critic in Figure 1.)

Figure 1: Example of a concern being expressed about the validity of the NPS designations.

To look for evidence of the designations, I re-read the 2003 HBR article by Fred Reichheld that made the NPS famous. Reichheld does mention that the reason for the promoter classification is customer referral and repurchase rates but doesn’t provide a lot of detail (not too surprising given it’s an HBR article) or mention the reason for detractors here.

Figure 2: Quote from the HBR article “The One Number You Need to Grow,” showing the justification for the designation of detractors, passives, and promoters.

In his 2006 book, The Ultimate Question, Reichheld further explains the justification for the cutoffs between detractors, passives, and promoters. In analyzing several thousand comments, he reported that 80% of the negative word-of-mouth comments came from those who responded from 0 to 6 on the likelihood-to-recommend item (p. 30). He further reiterated the claim that 80% of customer referrals came from promoters (9s and 10s).

Contrary to at least one prominent UX voice on social media, there is some evidence and justification for the designations. It’s based on referral and repurchase behaviors and the sharing of negative comments. This might not be enough evidence to convince people (and certainly not dogmatic critics) to use these designations though. It would be good to find corroborating data.

The Challenges with Purchases and Referrals

Corroborating the promoter designation means finding purchases and referrals. It’s not easy associating actual purchases and actual referrals with attitudinal data. You need a way to associate customer survey data with purchases and then track purchases from friends and colleagues. Privacy issues aside, even in the same company, purchase data is often kept in different (and guarded) databases, making associations challenging. It was something I dealt with constantly while at Oracle.

What’s more, companies have little incentive to share repurchase rates and survey data with outside firms and third parties may not have access to actual purchase history. Instead, academics and researchers often rely on reported purchases and reported referrals, which may be less accurate than records of actual purchases and actual referrals (a topic for an upcoming article). It’s nonetheless common in the Market Research literature to rely on stated past behavior as a reasonable proxy for actual behavior. We’ll also address purchases and referrals in a future article.

Collecting Word-of-Mouth Comments

But what about the negative comments used to justify the cutoff between detractors and passives? We wanted to replicate Reichheld’s findings that detractors accounted for a substantial portion of negative comments using another dataset to see whether the pattern held.

We looked at open-ended comments we collected from about 500 U.S. customers regarding their most recent experiences with one of nine prominent brands and products. We collected the data ourselves from an online survey in November 2017. It included a mix of airlines, TV providers, and digital experiences. In total, we had 452 comments regarding the most recent experience with the following brands/products:

  • American Airlines
  • Delta Airlines
  • United Airlines
  • Comcast
  • DirecTV
  • Dish Network
  • Facebook
  • iTunes
  • Netflix

Participants in the survey also answered the 11-point Likelihood to Recommend question, as well as a 10-point and 5-point version of the same question.

Coding the Sentiments

The open-ended comments were coded into sentiments by two independent evaluators. Negative comments were coded -1, neutral comments 0, and positive comments 1. During the coding process, the evaluators didn’t have access to the raw LTR scores (0 to 10) or other quantitative information.

In general, there was good agreement between the evaluators. The correlation between sentiment scores was high (r = .83) and they agreed 82% of the time on scores. On the remaining 18% where there was disagreement, differences were reconciled, and a sentiment was selected.

Most comments were neutral (43%) or positive (39%), with only 21% of the comments being coded as negative.
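For readers who want to reproduce this kind of agreement check, here is a minimal Python sketch, assuming each evaluator’s sentiment codes (-1, 0, 1) are available as parallel arrays; the ten codes below are invented for illustration, not the study data.

    import numpy as np

    # Hypothetical sentiment codes from two evaluators (-1 = negative, 0 = neutral, 1 = positive).
    rater_a = np.array([1, 0, -1, 0, 1, -1, 0, 1, 0, -1])
    rater_b = np.array([1, 0, -1, 1, 1, -1, 0, 1, 0,  0])

    # Pearson correlation between the two sets of codes.
    r = np.corrcoef(rater_a, rater_b)[0, 1]
    # Simple percent agreement (exact matches).
    agreement = np.mean(rater_a == rater_b)

    print(f"correlation r = {r:.2f}, percent agreement = {agreement:.0%}")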

Examples of positive comments

“I flew to Hawaii for vacation, the staff was friendly and helpful! I would recommend it to anyone!”—American Airlines Customer

“I love my service with Dish network. I use one of their affordable plans and get many options. I have never had an issue with them, and they are always willing to work with me if something has financially changed.”—Dish Network Customer

Examples of neutral comments

“I logged onto Facebook, checked my notifications, scrolled through my feed, liked a few things, commented on one thing, and looked at some memories.”—Facebook User

“I have a rental property and this is the current TV subscription there. I access the site to manage my account and pay my bill.”—DirecTV User

Examples of negative comments

“I took a flight back from Boston to San Francisco 2 weeks ago on United. It was so terrible. My seat was tiny and the flight attendants were rude. It also took forever to board and deboard.”—United Airlines Customer

“I do not like Comcast because their services consistently have errors and we always have issues with the internet. They also frequently try to raise prices on our bill through random fees that increase over time. And their customer service is unsatisfactory. The only reason we still have Comcast is because it is the best option in our area.”—Comcast Customer

Associating Sentiments to Likelihood to Recommend (Qual to Quant)

We then associated each coded sentiment with the 0 to 10 values on the Likelihood to Recommend item provided by the respondent. Figure 3 shows this relationship.

Figure 3: Percent of positive or negative comments associated with each LTR score from 0 to 10.

For example, 24% of all negative comments were associated with people who gave a 0 on the Likelihood to Recommend scale (the lowest response option). In contrast, 35% of positive comments were associated with people who scored the maximum 10 (most likely to recommend). This is further evidence for the extreme responder effect we’ve discussed in an earlier article.

You can see a pattern: as the score increases from 0 to 10, the percent of negative comments goes down (r = -.71) and the percent of positive comments goes up (r = .87). The relationship between comment sentiment and scores isn’t perfectly linear (otherwise the correlation would be r = 1). For example, the percent of positive comments is actually higher at a response of 8 than at 9, and the percent of negative comments is higher at 5 than at 4 (possibly an artifact of this sample size). Nonetheless, the relationship is very strong.
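A rough Python sketch of this per-score breakdown follows, assuming a DataFrame with one row per respondent, an “ltr” column (0 to 10) and a reconciled “sentiment” column (-1, 0, 1). The handful of rows below are invented for illustration, not the survey data.

    import pandas as pd

    df = pd.DataFrame({
        "ltr":       [0, 2, 5, 6, 6, 7, 8, 9, 10, 10],
        "sentiment": [-1, -1, -1, -1, 0, 1, 1, 1, 1, 1],
    })

    # For each LTR score, what share of all negative (and all positive) comments does it hold?
    neg = df[df.sentiment == -1].groupby("ltr").size()
    pos = df[df.sentiment == 1].groupby("ltr").size()
    pct_negative = (neg / neg.sum()).reindex(range(11), fill_value=0)
    pct_positive = (pos / pos.sum()).reindex(range(11), fill_value=0)

    # Correlation between the score and each share, mirroring the r values reported above.
    scores = pd.Series(range(11), dtype=float)
    print(pct_negative.round(2).to_dict())
    print("r(score, % negative) =", round(scores.corr(pct_negative), 2))
    print("r(score, % positive) =", round(scores.corr(pct_positive), 2))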

Detractor Threshold Supported

What’s quite interesting from this analysis is that at a score of 6, the ratio of positive to negative comments flips. Respondents with scores above a 6 (7s-10s) are more likely to make positive comments about their most recent experience. Respondents who scored their Likelihood to Recommend at 6 and below are more likely to make negative comments (spread negative word of mouth) about their most recent experience.

At a score of 6, a participant is about 70% more likely to make a negative comment than a positive comment (10% vs 6% respectively). As scores go lower, the ratio goes up dramatically. At a score of 5, participants are more than three times as likely to make a negative comment as a positive comment. At a score of 0, customers are 42 times more likely to make a negative rather than a positive comment (0.6% vs. 24% respectively).

When aggregating the raw scores into promoters, passives, and detractors, we can see that a substantial 90% of negative comments are associated with detractors (0 to 6s). This is shown in Figure 4.

The positive pattern is less pronounced, but still a majority (54%) of positive comments are associated with promoters (9s and 10s). It’s also interesting to see that the passives (7s and 8s) have a much more uniform chance of making a positive, neutral, or negative comment.

This corroborates the data from Reichheld, which showed 80% of negative comments were associated with those who scored 0 to 6. He didn’t report the percent of positive comments with promoters and didn’t associate the responses to each scale point as we did here (you’re welcome).

Figure 4: Percent of positive or negative comments associated with each NPS classification.
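To reproduce the Figure 4 aggregation, the same hypothetical DataFrame can be bucketed into the standard NPS classifications; a minimal sketch follows, again with invented data.

    import pandas as pd

    def nps_bucket(score):
        """Standard NPS classification: 0-6 detractor, 7-8 passive, 9-10 promoter."""
        if score <= 6:
            return "detractor"
        if score <= 8:
            return "passive"
        return "promoter"

    df = pd.DataFrame({
        "ltr":       [0, 2, 5, 6, 6, 7, 8, 9, 10, 10],
        "sentiment": [-1, -1, -1, -1, 0, 1, 1, 1, 1, 1],
    })
    df["bucket"] = df["ltr"].apply(nps_bucket)

    # Share of all negative (and all positive) comments falling in each classification.
    neg_share = df[df.sentiment == -1]["bucket"].value_counts(normalize=True)
    pos_share = df[df.sentiment == 1]["bucket"].value_counts(normalize=True)
    print("negative comments by bucket:", neg_share.round(2).to_dict())
    print("positive comments by bucket:", pos_share.round(2).to_dict())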

If your organization uses a five-point Likelihood to Recommend scale (5 = extremely likely and 1 = not at all likely), there are similar patterns, albeit on a more compressed scale (see Figure 5). At a response of 3, the ratio of positive to negative comments also flips—making responses 3 or below also good designations for detractors. At a score of 3, a customer is almost four times as likely to make a negative comment about their experience than a positive comment.

Figure 5: Percent of positive or negative comments associated with each LTR score from 1 to 5 (for companies that use a 5-point scale).

Summary & Takeaways

An examination of 452 open-ended comments about customers’ most recent experiences with nine prominent brands and products revealed:

  • Detractors accounted for 90% of negative comments. This independent evaluation corroborates the earlier analysis by Reichheld that found detractors accounted for a majority of negative word-of-mouth comments. This smaller dataset actually found a higher percentage of negative comments associated with 0 to 6 responses than Reichheld reported.
  • Six is a good threshold for identifying negative comments. The probability that a comment will be negative (negative word of mouth) starts to exceed the probability that it will be positive at 6 (on the 11-point LTR scale) and at 3 (on a 5-point scale). Researchers looking at LTR scores alone can use these thresholds to get some idea of the probable sentiment behind a customer’s most recent experience.
  • Repurchase and referral rates need to be examined. This analysis didn’t examine the relationship between referrals or repurchases (reported and observed) and likelihood to recommend, a topic for future research to corroborate the promoter designation.
  • Results are for specific brands used. In this analysis, we selected a range of brands and products we expected to represent a good range of NPS scores (from low to high). Future analyses can examine whether the pattern of scores at 6 or below correspond to negative sentiment in different contexts (e.g. for the most recent purchase) or for other brands/products/websites.
  • Think probabilistically. This analysis doesn’t mean a customer who gave a score of 6 or below necessarily had a bad experience or will say bad things about a company. Nor does it mean that a customer who gives a 9 or 10 necessarily had a favorable experience. You should think probabilistically about UX measures in general and NPS too. That is, it’s more likely (higher probability) that as scores go down on the Likelihood to Recommend item, the chance someone will be saying negative things goes up (but doesn’t guarantee it).
  • Examine your relationships between scores and comments. Most companies we work with have a lot of NPS data associated with verbatim comments. Use the method of coding sentiments described here to see how well the detractor designation matches sentiment and, if possible, see how well the promoter designations correspond with repurchase and referral rates or other behavioral measures (and consider sharing your results!).
  • Take a measured approach to making decisions. Many aspects of measurement aren’t intuitive and it’s easy to dismiss what we don’t understand or are skeptical about. Conversely, it’s easy to accept what’s “always been done” or published in high profile journals. Take a measured approach to deciding what’s best (including on how to use the NPS). Don’t blindly accept programs that claim to be revolutionary without examining the evidence. And don’t be quick to toss out the whole system because it has shortcomings or is over-hyped (we’d have to toss out a lot of methods and authors if this were the case). In all cases, look for corroborating evidence…probably something more than what you find on Twitter.

Source: Do Detractors Really Say Bad Things about a Company? by analyticsweek