Do Detractors Really Say Bad Things about a Company?

Can you think of a bad experience you had with a company?

Did you tell a friend about the bad experience?

Negative word of mouth can be devastating for company and product reputation. If companies can track it and do something to fix the problem, the damage can be contained.

This is one of the selling points of the Net Promoter Score. That is, customers who rate companies low on a 0 to 10 scale (6 and below) are dubbed “Detractors” because they‘re more likely spreading negative word of mouth and discouraging others from buying from a company. Companies with too much negative word of mouth would be unable to grow as much as others that have more positive word of mouth.

But is there any evidence that low scorers are really more likely to say bad things?

Is the NPS Scoring Divorced from Reality?

There is some concern that these NPS designations are divorced from reality. That is, there’s no evidence (or reason) for detractors being classified as 0 to 6 and promoters as 9 to 10. If these designations are indeed arbitrary, that’s concerning. (See the tweet from a vocal critic in Figure 1.)

Figure 1: Example of a concern being expressed about the validity of the NPS designations.

To look for evidence behind the designations, I re-read the 2003 HBR article by Fred Reichheld that made the NPS famous. Reichheld does mention that the promoter classification is based on customer referral and repurchase rates, but he doesn’t provide a lot of detail (not too surprising given it’s an HBR article) or mention the reason for the detractor cutoff there.

Figure 2: Quote from the HBR article “The One Number You Need to Grow,” showing the justification for the designation of detractors, passives, and promoters.

In his 2006 book, The Ultimate Question, Reichheld further explains the justification for the cutoffs between detractors, passives, and promoters. In analyzing several thousand comments, he reported that 80% of the negative word-of-mouth comments came from those who responded from 0 to 6 on the likelihood-to-recommend item (p. 30). He also reiterated the claim that 80% of customer referrals came from promoters (9s and 10s).

Contrary to at least one prominent UX voice on social media, there is some evidence and justification for the designations. It’s based on referral and repurchase behaviors and the sharing of negative comments. This might not be enough evidence to convince people (and certainly not dogmatic critics) to use these designations, though. It would be good to find corroborating data.

The Challenges with Purchases and Referrals

Corroborating the promoter designation means finding purchases and referrals. It’s not easy associating actual purchases and actual referrals with attitudinal data. You need a way to associate customer survey data with purchases and then track purchases from friends and colleagues. Privacy issues aside, even in the same company, purchase data is often kept in different (and guarded) databases, making associations challenging. It was something I dealt with constantly while at Oracle.

What’s more, companies have little incentive to share repurchase rates and survey data with outside firms, and third parties may not have access to actual purchase history. Instead, academics and researchers often rely on reported purchases and reported referrals, which may be less accurate than records of actual purchases and actual referrals. It’s nonetheless common in the market research literature to rely on stated past behavior as a reasonable proxy for actual behavior. We’ll address purchases and referrals in a future article.

Collecting Word-of-Mouth Comments

But what about the negative comments used to justify the cutoff between detractors and passives? We wanted to replicate Reichheld’s findings that detractors accounted for a substantial portion of negative comments using another dataset to see whether the pattern held.

We looked at open-ended comments we collected from about 500 U.S. customers regarding their most recent experiences with one of nine prominent brands and products. We collected the data ourselves from an online survey in November 2017. It included a mix of airlines, TV providers, and digital experiences. In total, we had 452 comments regarding the most recent experience with the following brands/products:

  • American Airlines
  • Delta Airlines
  • United Airlines
  • Comcast
  • DirecTV
  • Dish Network
  • Facebook
  • iTunes
  • Netflix

Participants in the survey also answered the 11-point Likelihood to Recommend question, as well as a 10-point and 5-point version of the same question.

Coding the Sentiments

The open-ended comments were coded for sentiment by two independent evaluators. Negative comments were coded -1, neutral 0, and positive 1. During the coding process, the evaluators didn’t have access to the raw LTR scores (0 to 10) or other quantitative information.

In general, there was good agreement between the evaluators. The correlation between sentiment scores was high (r = .83) and they agreed 82% of the time on scores. On the remaining 18% where there was disagreement, differences were reconciled, and a sentiment was selected.

Most comments were neutral (43%) or positive (39%); only 21% of the comments were coded as negative.
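
As a rough sketch of this kind of agreement check (made-up codes purely for illustration; the raw coded comments aren’t published here), the computation in R is straightforward:

```r
# Hypothetical sentiment codes from two independent raters:
# -1 = negative, 0 = neutral, 1 = positive
rater1 <- c(-1, 0, 1, 1, 0, -1, 0, 1, 0, 0)
rater2 <- c(-1, 0, 1, 0, 0, -1, 0, 1, 1, 0)

mean(rater1 == rater2)   # percent agreement (we observed 82%)
cor(rater1, rater2)      # correlation between codes (we observed r = .83)
which(rater1 != rater2)  # disagreements to reconcile by discussion
```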

Examples of positive comments

“I flew to Hawaii for vacation, the staff was friendly and helpful! I would recommend it to anyone!”—American Airlines Customer

“I love my service with Dish network. I use one of their affordable plans and get many options. I have never had an issue with them, and they are always willing to work with me if something has financially changed.”—Dish Network Customer

Examples of neutral comments

“I logged onto Facebook, checked my notifications, scrolled through my feed, liked a few things, commented on one thing, and looked at some memories.”—Facebook User

“I have a rental property and this is the current TV subscription there. I access the site to manage my account and pay my bill.”—DirecTV User

Examples of negative comments

“I took a flight back from Boston to San Francisco 2 weeks ago on United. It was so terrible. My seat was tiny and the flight attendants were rude. It also took forever to board and deboard.”—United Airlines Customer

“I do not like Comcast because their services consistently have errors and we always have issues with the internet. They also frequently try to raise prices on our bill through random fees that increase over time. And their customer service is unsatisfactory. The only reason we still have Comcast is because it is the best option in our area.”—Comcast Customer

Associating Sentiments to Likelihood to Recommend (Qual to Quant)

We then associated each coded sentiment with the 0 to 10 values on the Likelihood to Recommend item provided by the respondent. Figure 3 shows this relationship.

Figure 3: Percent of positive or negative comments associated with each LTR score from 0 to 10.

For example, 24% of all negative comments were associated with people who gave a 0 on the Likelihood to Recommend scale (the lowest response option). In contrast, 35% of positive comments were associated with people who scored the maximum 10 (most likely to recommend). This is further evidence for the extreme responder effect we’ve discussed in an earlier article.

You can see a pattern: as the score increases from 0 to 10, the percent of negative comments goes down (r = -.71) and the percent of positive comments goes up (r = .87). There isn’t a perfect linear relationship between comment sentiment and scores (otherwise the correlations would be r = 1). For example, the percent of positive comments is actually higher at a response of 8 than at 9, and the percent of negative comments is higher at 5 than at 4 (possibly an artifact of this sample size). Nonetheless, this relationship is very strong.
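
Here’s a sketch of that tabulation in R, assuming a hypothetical data frame with one row per comment, the respondent’s LTR response (0 to 10), and the coded sentiment (-1, 0, or 1); the random data is just a stand-in for our actual dataset:

```r
set.seed(1)
d <- data.frame(
  ltr       = sample(0:10, 452, replace = TRUE),     # 0-10 LTR response
  sentiment = sample(c(-1, 0, 1), 452, replace = TRUE)
)

# Of all negative (or positive) comments, the share at each LTR point
neg <- prop.table(table(factor(d$ltr[d$sentiment == -1], levels = 0:10)))
pos <- prop.table(table(factor(d$ltr[d$sentiment ==  1], levels = 0:10)))

# Correlate those shares with the 0-10 score
# (we observed r = -.71 for negative and r = .87 for positive)
cor(0:10, as.numeric(neg))
cor(0:10, as.numeric(pos))
```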

Detractor Threshold Supported

What’s quite interesting from this analysis is that at a score of 6, the ratio of positive to negative comments flips. Respondents with scores above a 6 (7s-10s) are more likely to make positive comments about their most recent experience. Respondents who scored their Likelihood to Recommend at 6 and below are more likely to make negative comments (spread negative word of mouth) about their most recent experience.

At a score of 6, a participant is about 70% more likely to make a negative comment than a positive comment (10% vs. 6% respectively). As scores go lower, the ratio goes up dramatically. At a score of 5, participants are more than three times as likely to make a negative comment as a positive comment. At a score of 0, customers are 42 times more likely to make a negative comment than a positive one (24% vs. 0.6% respectively).

When aggregating the raw scores into promoters, passives, and detractors, we can see that a substantial 90% of negative comments are associated with detractors (0 to 6s). This is shown in Figure 4.

The positive pattern is less pronounced, but still a majority (54%) of positive comments are associated with promoters (9s and 10s). It’s also interesting to see that the passives (7s and 8s) have a much more uniform chance of making a positive, neutral, or negative comment.

This corroborates the data from Reichheld, which showed 80% of negative comments were associated with those who scored 0 to 6. He didn’t report the percent of positive comments with promoters and didn’t associate the responses to each scale point as we did here (you’re welcome).

Figure 4: Percent of positive or negative comments associated with each NPS classification.
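
Continuing the hypothetical sketch above, aggregating the raw scores into the standard NPS segments is a simple recode:

```r
# 0-6 = Detractor, 7-8 = Passive, 9-10 = Promoter
nps_segment <- function(ltr) {
  cut(ltr, breaks = c(-1, 6, 8, 10),
      labels = c("Detractor", "Passive", "Promoter"))
}
d$segment <- nps_segment(d$ltr)

# Share of negative and positive comments by segment
# (we found 90% of negative comments come from detractors,
#  54% of positive comments from promoters)
prop.table(table(d$segment[d$sentiment == -1]))
prop.table(table(d$segment[d$sentiment ==  1]))
```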

If your organization uses a five-point Likelihood to Recommend scale (5 = extremely likely and 1 = not at all likely), there are similar patterns, albeit on a more compressed scale (see Figure 5). At a response of 3, the ratio of positive to negative comments also flips—making responses of 3 or below good designations for detractors. At a score of 3, a customer is almost four times as likely to make a negative comment about their experience as a positive one.

Figure 5: Percent of positive or negative comments associated with each LTR score from 1 to 5 (for companies that use a 5-point scale).

Summary & Takeaways

An examination of 452 open-ended comments about customers’ most recent experiences with nine prominent brands and products revealed:

  • Detractors accounted for 90% of negative comments. This independent evaluation corroborates the earlier analysis by Reichheld that found detractors accounted for a majority of negative word-of-mouth comments. This smaller dataset actually found a higher percentage of negative comments associated with 0 to 6 responses than Reichheld reported.
  • Six is a good threshold for identifying negative comments. The probability that a comment will be negative (negative word of mouth) starts to exceed the probability that it will be positive at 6 (on the 11-point LTR scale) and at 3 (on a 5-point scale). Researchers looking at LTR scores alone can use these thresholds to get some idea of the likely sentiment of a customer’s most recent experience.
  • Repurchase and referral rates need to be examined. This analysis didn’t examine the relationship between referrals or repurchases (reported and observed) and likelihood to recommend, a topic for future research to corroborate the promoter designation.
  • Results are for specific brands used. In this analysis, we selected a range of brands and products we expected to represent a good range of NPS scores (from low to high). Future analyses can examine whether the pattern of scores at 6 or below correspond to negative sentiment in different contexts (e.g. for the most recent purchase) or for other brands/products/websites.
  • Think probabilistically. This analysis doesn’t mean a customer who gave a score of 6 or below necessarily had a bad experience or will say bad things about a company. Nor does it mean that a customer who gives a 9 or 10 necessarily had a favorable experience. You should think probabilistically about UX measures in general and NPS too. That is, it’s more likely (higher probability) that as scores go down on the Likelihood to Recommend item, the chance someone will be saying negative things goes up (but doesn’t guarantee it).
  • Examine your relationships between scores and comments. Most companies we work with have a lot of NPS data associated with verbatim comments. Use the method of coding sentiments described here to see how well the detractor designation matches sentiment and, if possible, see how well the promoter designations correspond with repurchase and referral rates or other behavioral measures (and consider sharing your results!).
  • Take a measured approach to making decisions. Many aspects of measurement aren’t intuitive and it’s easy to dismiss what we don’t understand or are skeptical about. Conversely, it’s easy to accept what’s “always been done” or published in high profile journals. Take a measured approach to deciding what’s best (including on how to use the NPS). Don’t blindly accept programs that claim to be revolutionary without examining the evidence. And don’t be quick to toss out the whole system because it has shortcomings or is over-hyped (we’d have to toss out a lot of methods and authors if this were the case). In all cases, look for corroborating evidence…probably something more than what you find on Twitter.

Source: Do Detractors Really Say Bad Things about a Company? by analyticsweek

Interpreting Single Items from the SUS

The System Usability Scale (SUS) has been around for decades and is used by hundreds of organizations globally.

The 10-item SUS questionnaire is a measure of a user’s perception of the usability of a “system.”

A system can be just about anything a human interacts with: software apps (business and consumer), hardware, mobile devices, mobile apps, websites, or voice user interfaces.

The SUS questionnaire is scored by combining the 10 items into a single SUS score ranging from 0 to 100. From its creation, though, John Brooke cautioned against interpreting individual items:

“Note that scores for individual items are not meaningful on their own.” ~ John Brooke

Brooke’s caution against examining scores for the individual items of the SUS was appropriate at the time. After all, he was publishing a “quick and dirty” questionnaire with analyses based on data from 20 people.

There is a sort of conventional wisdom that multiple items are superior to single items; in fact, single-item measures and analyses are often dismissed in peer-reviewed journals.

More items will, by definition, increase the internal consistency reliability of a questionnaire when measured using Cronbach’s alpha. In fact, you can’t measure internal consistency reliability with only one item. However, internal consistency isn’t the only way to measure reliability; test-retest reliability is another. Single-item measures, such as satisfaction, brand attitude, task ease, and likelihood to recommend, exhibit sufficient test-retest reliability, and little if anything may be gained by using multiple items.
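
For reference, the standard rule for combining the 10 items into the overall 0 to 100 score: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5. A minimal R implementation:

```r
# x: vector of 10 responses in item order, each 1-5
# (1 = strongly disagree, 5 = strongly agree)
sus_score <- function(x) {
  stopifnot(length(x) == 10, all(x %in% 1:5))
  odd  <- x[c(1, 3, 5, 7, 9)] - 1    # positively worded items
  even <- 5 - x[c(2, 4, 6, 8, 10)]   # negatively worded items
  2.5 * sum(odd, even)               # rescale 0-40 to 0-100
}

sus_score(c(4, 2, 4, 1, 4, 2, 4, 2, 4, 2))  # 77.5
```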

SUS Benchmarks

John Brooke didn’t publish any benchmarks or guidance for what makes a “good” SUS score. But because the SUS has been used extensively by other researchers who have published the results, we have been able to derive a database of scores. Table 1 shows SUS grades and percentiles that Jim Lewis and I put together from that database, which itself is an adaptation of work from Bangor and Kortum.

Grade SUS Percentile Range
A+ 84.1-100 96-100
A 80.8-84.0 90-95
A- 78.9-80.7 85-89
B+ 77.2-78.8 80-84
B 74.1-77.1 70-79
B- 72.6-74.0 65-69
C+ 71.1-72.5 60-64
C 65.0-71.0 41-59
C- 62.7-64.9 35-40
D 51.7-62.6 15-34
F 0-51.6 0-14

Table 1: SUS scores, grades, and percentile ranks.

To use the table, find your raw SUS score in the middle column and then find its corresponding grade in the left column and percentile rank in the right column. For example, a SUS score of 75 is a bit above the global average of 68 and nets a “B” grade. A SUS score below 50 puts it in the “F” grade with a percentile rank among the worst interfaces (worse than 86% or better than only 14%).
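
Table 1 is easy to encode as a lookup. A small R sketch using the grade boundaries above:

```r
sus_grade <- function(score) {
  cut(score,
      breaks = c(0, 51.6, 62.6, 64.9, 71.0, 72.5,
                 74.0, 77.1, 78.8, 80.7, 84.0, 100),
      labels = c("F", "D", "C-", "C", "C+", "B-",
                 "B", "B+", "A-", "A", "A+"),
      include.lowest = TRUE)
}

sus_grade(c(75, 68, 45))  # "B" "C" "F"
```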

Why Develop Item-Level Benchmarks?

While the SUS provides an overall measure of perceived ease and our grading scale provides a way to interpret the raw score, researchers may want to measure and set targets for other more specific experience attributes (e.g. perceptions of findability, complexity, consistency, and confidence). To do so, researchers would need to develop specific items to measure those more specific attributes.

Some attributes, such as findability, do not appear in the 10 SUS items. Other attributes, such as perceived complexity (Item 2), perceived ease of use (Item 3), perceived consistency (Item 6), perceived learnability (Item 7), and confidence in use (Item 9) do appear in the SUS.

Researchers who use the SUS and who also need to assess any of these specific attributes would need to decide whether to ask participants in their studies to rate this attribute twice (once in the SUS and again using a separate item) or to use the response to the SUS item in two ways (contributing to the overall SUS score and as a measure of the specific attribute of interest). The latter, using the response to the SUS item in two ways, is the more efficient approach.

In short, using item benchmarks saves respondents time as they answer fewer items and saves researchers time as they don’t have to derive new items and get the bonus of having benchmarks to make the responses more meaningful.

Developing SUS Item Level Benchmarks

To help make the process of understanding individual SUS items better, Jim Lewis and I compiled data from 166 unpublished industrial usability studies/surveys based on scores from 11,855 individual SUS questionnaires.

We then used regression equations to predict overall SUS scores from the individual items. We found each item explained between 35% and 89% of the variance in the full SUS score (a large percentage for a single item). Full details of the regression equations and process are available in the Journal of Usability Studies article.
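
Here’s a simulated sketch of that per-item regression (made-up data purely for illustration; the published equations and the real variance figures are in the JUS article):

```r
set.seed(42)
item3 <- sample(1:5, 500, replace = TRUE)                     # responses to Item 3
sus   <- pmin(100, pmax(0, 15 * item3 + rnorm(500, 10, 12)))  # fake total SUS scores

fit <- lm(sus ~ item3)
summary(fit)$r.squared  # share of SUS variance explained by the single item
```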

To make item benchmarks easy to reference, we computed the score you’d need on each item to achieve an average SUS score of 68 (a “C”) or a good score of 80 (an “A-”). Why 80? We’ve found that a SUS of 80 has become a common industrial goal. It’s also a good psychological threshold that’s attainable. Achieving a raw SUS score of 90 sounds better but is extraordinarily difficult (only one study in the database exceeded 90: data from Netflix).

Table 2 shows the mean score you would need for each item to achieve an average “C” or good “A-“ score.

SUS Item Target for Average Score Target for Good Score
1. I think that I would like to use this system frequently. ≥ 3.39 ≥ 3.80
2. I found the system unnecessarily complex. ≤ 2.44 ≤ 1.85
3. I thought the system was easy to use. ≥ 3.67 ≥ 4.24
4. I think that I would need the support of a technical person to be able to use this system. ≤ 1.85 ≤ 1.51
5. I found the various functions in this system were well integrated. ≥ 3.55 ≥ 3.96
6. I thought there was too much inconsistency in this system. ≤ 2.20 ≤ 1.77
7. I would imagine that most people would learn to use this system very quickly. ≥ 3.71 ≥ 4.19
8. I found the system very cumbersome to use. ≤ 2.25 ≤ 1.66
9. I felt very confident using the system. ≥ 3.72 ≥ 4.25
10. I needed to learn a lot of things before I could get going with this system. ≤ 2.09 ≤ 1.64

Table 2: Benchmarks for average and good scores for the 10 SUS items.

For example, if you’re using Item 3, “I thought the system was easy to use,” then a mean score of 3.67 would correspond to a SUS score of 68 (an average overall system score). For an above average SUS score of 80, the corresponding target for Item 3 would be a mean score of at least 4.24.

Note that due to the mixed tone of the SUS, the directionality of the item targets differs for odd- and even-numbered items. Specifically, for odd-numbered items, observed means need to be at or above the targets; for even-numbered items, observed means need to be at or below the targets. For example, for Item 2, “I found the system unnecessarily complex,” you would want a mean at or below 2.44 to achieve an average score (SUS equivalent of 68) and at or below 1.85 for a good score (SUS equivalent of 80).
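
A small R sketch that encodes the Table 2 targets for an average score and checks observed item means against them, flipping the direction for even-numbered items:

```r
# Targets for an "average" (SUS equivalent of 68) experience, items 1-10
avg_target <- c(3.39, 2.44, 3.67, 1.85, 3.55, 2.20, 3.71, 2.25, 3.72, 2.09)

meets_average <- function(item_means) {
  odd  <- seq(1, 9, 2)   # positive tone: mean must be at or above target
  even <- seq(2, 10, 2)  # negative tone: mean must be at or below target
  ok <- logical(10)
  ok[odd]  <- item_means[odd]  >= avg_target[odd]
  ok[even] <- item_means[even] <= avg_target[even]
  ok
}

# Hypothetical observed item means from a study
meets_average(c(3.5, 2.1, 3.9, 1.7, 3.6, 2.0, 3.8, 2.1, 3.9, 1.9))
```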

Summary

The popularity of the SUS has allowed for the creation of normalized databases and guidance on what constitutes poor, good, or excellent scores. Researchers on some occasions may want to use single items from the SUS to benchmark more specific constructs (e.g. “I felt very confident using the system” representing user confidence). Using data from almost 12,000 participants, we were able to create benchmarks for individual SUS items to achieve average “C” scores and good “A-” SUS-equivalent scores. These benchmarks allow researchers to know what mean value to aim for to achieve an average or good experience when interpreting single items from the SUS.

Originally Posted at: Interpreting Single Items from the SUS

Borrowing Technology from Media & Entertainment for Big Data Analytics in the Cloud

For most of computing’s history, data meant “structured” data or data that fits neatly into pre-defined categories and rows stored in databases or spreadsheets. But the big data movement has changed all of that with the proliferation of unstructured data analysis. Unstructured data is any data that doesn’t fit into a predefined data model. It includes things like video, images, text, and all the data being logged by sensors and the myriad of digital devices. Where structured data is relatively easy to store and analyze using traditional technology, unstructured data isn’t.

Nonetheless, today massive collections of unstructured data are being analyzed for altruistic purposes like combating crime and preventing disease, but also for profit-motivated goals like spotting business trends. And, as we’ve entered an era of pervasive surveillance – including aerial surveillance by drones and low earth orbit satellites capable of delivering 50 cm resolution imagery – media content (photos, videos and audio) is more relevant to big data analytics than ever before.

Unstructured data tends to be vastly larger than structured data, and is mostly responsible for our crossing the threshold from regular old data to “big data.” That threshold is not defined by a specific number of terabytes or even petabytes, but by what happens when data accumulates to an amount so large that innovative techniques are required to store, analyze and move it. Public cloud computing technology is one of these innovations that’s being applied to big data analytics because it offers a virtually unlimited elastic supply of compute power, networking and storage with a pay-for-use pricing model (all of which opens up new possibilities for analyzing both unstructured and structured big data).

Before its recent and unfortunate shutdown, the respected tech news and research site GigaOm released a survey on enterprise big data. In it, over 90% of participants said they planned to move more than a terabyte of data into the cloud, and 20% planned to move more than 100 TB. Cloud storage is a compelling solution as both an elastic repository for this overflowing data and a location readily accessible to cloud-based analysis.

However, one of the challenges that comes with using public cloud computing and cloud storage is getting the data into the cloud in the first place. Moving large files and bulk data sets over the Internet can be very inefficient with traditional protocols like FTP and HTTP, which remain the most common ways organizations move large files and the foundation for most of the options cloud storage providers offer for getting your data to them (besides shipping hard drives).

In that same GigaOm survey, 24% expressed concern about whether their available bandwidth could accommodate pushing their large data volumes up to the cloud, and 21% worried that they don’t have the expertise to carry out the data migration (read about all the options for moving data to any of the major cloud storage providers, and you too might be intimidated).

While bandwidth and expertise are very legitimate concerns, there are SaaS (Software as a Service) large file transfer solutions that can make optimal use of bandwidth, are very easy to use and integrate with Amazon S3, Microsoft Azure and Google Cloud. In fact, the foundation technology of these solutions was originally built to move very large media files throughout the production, post production and distribution of film and television.

Back in the early 2000s, when the Media & Entertainment industry began actively transitioning from physical media, including tape and hard drives, to digital file-based workflows, it had a big data movement problem too. For companies like Disney and the BBC, sending digital media between their internal locations and external editing or broadcast partners was a serious issue. Compared to everything else moving over the Internet, those files were huge. (And broadcast masters are relatively small compared to the 4K raw camera footage being captured today. For example, an hour of raw camera footage often requires a terabyte or more of storage.)

During M&E’s transition from physical media to file-based media, companies like Signiant started developing new protocols for the fast transfer of large files over public and private IP networks, with the high security that the movie industry requires for their most precious assets. The National Academy of Television Arts and Sciences even recognized Signiant’s pioneering role with a Technology and Engineering Emmy award in 2014.

Today, that technology has evolved in step with the cloud revolution, and SaaS accelerated large file transfer technology is expanding to other industries. Far faster and more reliable than older technologies like FTP and HTTP, this solution can also be delivered as a service, so users do not have to worry about provisioning hardware and software infrastructure, including scaling and balancing servers for load peaks and valleys. The “expertise” many worry about needing is a non-issue because the solution is so simple to use. And it’s being used in particular to push large volumes to cloud storage for all kinds of time-sensitive projects, including big data analytics. For example, scientists are analyzing images of snow and ice cover to learn more about climate change, and (interesting though less benevolent) businesses are analyzing images of competitors’ parking lots — counting cars by make and model — in order to understand the shopping habits and demographics of their customers.

It’s always fascinating to see how innovation occurs. It almost never springs from nothing, but is adapted from techniques and technologies employed somewhere else to solve a different challenge. Who would have thought, at the turn of the century, that the technology developed for Media & Entertainment would be so relevant to big data scientific, government and business analytics? And that technology used to produce and deliver entertainment could be leveraged for the betterment of society?

Originally Posted at: Borrowing Technology from Media & Entertainment for Big Data Analytics in the Cloud

Customer Loyalty Feedback Meets Customer Relationship Management

In my new book, Total Customer Experience, I illustrate why three types of customer loyalty are needed to understand the different ways your customers can show their loyalty towards your company or brand. The three types of loyalty are:

  1. Retention Loyalty: likelihood of customers to stay with a company
  2. Advocacy Loyalty: likelihood of customers to recommend the company/advocate on the company’s behalf
  3. Purchasing Loyalty: likelihood of customers to expand their relationship with the company

Using this multi-faceted model, I developed a loyalty measurement approach, referred to as the RAPID Loyalty Approach, to help companies get a more comprehensive picture of customer loyalty. Understanding the factors that impact these different types of loyalty helps companies target customer experience improvement strategies to increase different types of customer loyalty.

Data Integration

When companies are able to link these RAPID loyalty metrics with other customer information, like purchase history, campaign responses and employee/partner feedback, the customer insights become deeper. TCELab (where I am the Chief Customer Officer) is working with Clicktools to help Salesforce customers implement the RAPID Loyalty Approach. This partnership brings together TCELab’s survey knowledge and advisory services with Clicktools’ exceptional feedback software and Salesforce integration; for the fifth consecutive year, Clicktools has received the Salesforce AppExchange™ Customer Choice Award for Best Survey App.

TCELab will include RAPID surveys in Clicktools’ survey library, available in all Clicktools editions and easily integrated with a RAPID Salesforce.com custom object. Salesforce reports and dashboards, including linkage analysis, will follow. Customers can call on the expertise of TCELab for advice on tailoring the surveys for their organization and for support in analysis and reporting.

Joint Whitepaper from TCELab and Clicktools

David Jackson, founder and CEO of Clicktools, and I have co-written a whitepaper titled, “RAPID Loyalty: A Comprehensive Approach to Customer Loyalty,” to present the basic structure and benefits of the RAPID approach and to offer Clicktools customers access to a special program for getting started.

Download the Whitepaper >>

Originally Posted at: Customer Loyalty Feedback Meets Customer Relationship Management by bobehayes

Emergence of #DataOps Age – @AndyHPalmer #FutureOfData #Podcast

[youtube https://www.youtube.com/watch?v=ER9mHaWMMww]

Emergence of #DataOps Age – @AndyHPalmer #FutureOfData

Youtube: https://youtu.be/ER9mHaWMMww
iTunes: http://math.im/itunes

In this podcast @AndyHPalmer from @Tamr sat with @Vishaltx from @AnalyticsWeek to talk about the emergence of, need for, and market around DataOps, a specialized capability born from the merger of data engineering and DevOps, driven by increasingly convoluted data silos and complicated processes. Andy shared his perspective on what some businesses and their leaders are doing wrong and how businesses need to rethink their data silos to future-proof themselves. This is a good podcast for any data leader thinking about cracking the code on getting high-quality insights from data.

Andy’s Recommended Read:
Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker https://amzn.to/2Lc6WqK
The Three-Body Problem by Cixin Liu and Ken Liu https://amzn.to/2rQyPvp

Andy’s BIO:
Andy Palmer is a serial entrepreneur who specializes in accelerating the growth of mission-driven startups. Andy has helped found and/or fund more than 50 innovative companies in technology, health care and the life sciences. Andy’s unique blend of strategic perspective and disciplined tactical execution is suited to environments where uncertainty is the rule rather than the exception. Andy has a specific passion for projects at the intersection of computer science and the life sciences.

Most recently, Andy co-founded Tamr, a next generation data curation company and Koa Labs, a start-up club in the heart of Harvard Square, Cambridge, MA.

Specialties: Software, Sales & Marketing, Web Services, Service Oriented Architecture, Drug Discovery, Database, Data Warehouse, Analytics, Startup, Entrepreneurship, Informatics, Enterprise Software, OLTP, Science, Internet, ecommerce, Venture Capital, Bootstrapping, Founding Team, Venture Capital firm, Software companies, early stage venture, corporate development, venture-backed, venture capital fund, world-class, stage venture capital

About #Podcast:
#FutureOfData podcast is a conversation starter to bring leaders, influencers and lead practitioners to come on show and discuss their journey in creating the data driven future.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest and email at info@analyticsweek.com

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Originally Posted at: Emergence of #DataOps Age – @AndyHPalmer #FutureOfData #Podcast by v1shal

Discussing the World of Crypto with @JoelComm / @BadCrypto

[youtube https://youtu.be/xJucEIDitas]

Discussing the World of Crypto with @JoelComm / @BadCrypto #FutureOfData

Youtube: https://youtu.be/xJucEIDitas
iTunes: http://apple.co/2ynxopz

In this podcast, Joel Comm from The Bad Crypto Podcast sat with Vishal Kumar, CEO of AnalyticsWeek, to discuss the world of cryptocurrencies. The discussion sheds light on the nuances of the rapidly exploding cryptocurrency space and some of the thinking behind the currencies, as well as the opportunities and risks in the industry. Joel shares his insights on how to think about these currencies and the long-term implications of the algorithms that run them. The podcast is a great listen for anyone who wants to understand the world of cryptocurrencies.

*Please note: this podcast and its content in no way constitute investment advice, nor are they intended to exert any positive or negative influence. Cryptocurrencies are highly volatile, and any investor must use absolute caution and care when evaluating such currencies.*

Joel’s Recommended Read:
Cryptocurrencies 101 By James Altucher http://bit.ly/2Bi5FMv

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Joel’s BIO:
As a knowledgeable & inspirational speaker, Joel speaks on a variety of business and entrepreneurial topics. He presents a step-by-step playbook on how to use social media as a leveraging tool to expand the reach of your brand, increase your customer base, and create fierce brand loyalty for your business. Joel is also able to speak with authority on the various ways to harness the marketing power of technology to explode profits. He offers an inspiring yet down-to-earth call to action for those who dream of obtaining growth and financial success. As someone who went from having only 87 cents in his bank account to creating multiple successful businesses, Joel is uniquely poised to instruct and inspire when it comes to using the various forms of new media as avenues towards the greater goal of business success. He is a broadcast veteran with thousands of hours in radio, podcasting, television and online video experience. Joel is the host of two popular, yet completely different podcasts. FUN with Joel Comm features the lighter side of top business and social leaders. The Bad Crypto Podcast makes cryptocurrency and bitcoin understandable to the masses.

Joel is the New York Times best-selling author of 14 books, including The AdSense Code, Click Here to Order: Stories from the World’s Most Successful Entrepreneurs, KaChing: How to Run an Online Business that Pays and Pays, Twitter Power 3.0, and Self Employed: 50 Signs That You Might Be an Entrepreneur. He has also written over 40 ebooks. He has appeared in The New York Times, on Jon Stewart’s The Daily Show, on CNN online, on Fox News, and many other places.

About #Podcast:
#FutureOfData podcast is a conversation starter to bring leaders, influencers and lead practitioners to come on show and discuss their journey in creating the data driven future.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Source

Nick Howe (@Area9Nick) talks about fabric of learning organization to bring #JobsOfFuture #podcast

[youtube https://www.youtube.com/watch?v=-1ZP_tbZFgI]

In this podcast, Nick Howe (@NickJHowe) from @Area9Learning talks about the transforming landscape of learning. He sheds light on some of the challenges in learning and some of the ways learning could match the evolving world and its needs. Nick also offers some tactical steps that businesses could adopt to create a world-class learning organization. This podcast is a must for any learning organization.

Nick’s Recommended Read:
The End of Average: Unlocking Our Potential by Embracing What Makes Us Different by Todd Rose https://amzn.to/2kiahYN
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom https://amzn.to/2IAPURg

Podcast Link:
iTunes: http://math.im/jofitunes
GooglePlay: http://math.im/jofgplay

Nick’s BIO:
Nick Howe is an award-winning Chief Learning Officer and business leader with a focus on the application of innovative education technologies. He is the Chief Learning Officer at Area9 Lyceum – one of the global leaders in adaptive learning technology – a Strategic Advisor to the Institute of Simulation and Training at the University of Central Florida, and a board advisor to multiple EdTech startups.

For twelve years Nick was the Chief Learning Officer at Hitachi Data Systems where he built and led the corporate university and online communities serving over 50,000 employees, resellers and customers.

With over 25 years’ global sales, sales enablement, delivery and consulting experience with Hitachi, EDS Corporation and Bechtel Inc., Nick is passionate about the transformation of customer experiences, partner relationships and employee performance through learning and collaboration.

About #Podcast:
#JobsOfFuture podcast is a conversation starter to bring leaders, influencers and lead practitioners to come on show and discuss their journey in creating the data driven future.

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#JobsOfFuture #Leadership #Podcast #Future of #Work #Worker & #Workplace

Originally Posted at: Nick Howe (@Area9Nick) talks about fabric of learning organization to bring #JobsOfFuture #podcast

2018 Trends in Blockchain

If the defining characteristic of data management in 2018 is the heterogeneity of contemporary computing environments, then Blockchain is a considerable factor contributing to its decentralization.

Expectations for this distributed ledger technology are decidedly high. Its low latency, irrefutable transaction capabilities are overtaking so many verticals that one of Forrester’s Top 10 Technology Trends To Watch: 2018 to 2020 predicts that by 2019 “a viable blockchain-based market will be commercialized.”

Blockchain’s growing popularity is directly attributed to its utilitarian nature, which supersedes individual industries and use cases. It’s not just a means of revolutionizing finance via cryptocurrencies such as Bitcoin, but of implementing new security paradigms, legal measures, and data sources for Artificial Intelligence. Most importantly, it could very well herald the end of silo culture.

By helping to seamlessly connect heterogeneous databases around the world in a peer-to-peer fashion, its overall impact is projected to be “as disruptive as the internet was 20 years ago—and still is” according to Algebraix Data CEO Charlie Silver.

For it to realize this future, however, there are a number of points of standardization within and between blockchain networks which must solidify.

They should begin doing so in earnest in the coming year.

Private Blockchains, Centralized Authority
The most common use of blockchain is for validating transactions and issuing monetary value for cryptocurrencies. These are typical instances of what are known as public blockchains, in which the ledger is distributed amongst individuals or businesses for sharing and determining the integrity of transactional data. What could truly trigger the expansion of blockchain adoption, however, is the growing credence associated with private blockchains. These networks extend only to a well-defined set of participants (i.e., not open to the general public), such as those in a supply chain or for some other discrete business purpose. The strength of public blockchains is largely in the lack of a central authority, which adds to the indisputable nature of transactions. In private blockchains, however, that centralized authority is the key to assuring the integrity of data exchanges. “What that means is there is a blockchain orchestrator that enables the interactions between the parties, coordinates all those things, provides the governance, and then when the transaction is done…you have permanent immutability, and transparency with permissions and so on,” commented One Network SVP of Products Adeel Najmi.

Smart Contracts
In addition to providing governance standards for all parties in the network, a centralized mediator also facilitates consistency in semantics and metadata, which is crucial for exchanging data. Without that centralization, blockchain communities must define their own data governance protocols, semantic standards, and Master Data Management modeling conventions. The notion of standards and the legality of exchanges between blockchains in the form of smart contracts will also come to prominence in 2018. Smart contracts involve denoting what various exchanges of data mean, what takes place when such data is transmitted, and what parties must do in agreement with one another for any variety of transactions. However, the dearth of standards for blockchain—particularly as they might apply between blockchains—leads to questions of legality of certain facets of smart contracts. According to Gartner: “Much of the legal basis for identity, trust, smart contracts, and other components are undefined in a blockchain context. Established laws still need to be revised and amended to accommodate blockchain use cases, and financial reporting is still unclear.” These points of uncertainty regarding blockchain correlate to its adoption rate, yet are to be expected for an emerging technology. Silver noted, “Like in the Internet of Things, there’s all kinds of standards debates going on. All new technologies have it.”

Artificial Intelligence
One of the more exciting developments to emerge in 2018 will be the synthesis of blockchain technologies with those for AI. There are a number of hypothetical ways in which these two technologies can influence—and aid—one another. Perhaps one of the more concrete ones is that the amounts of data involved in blockchain make excellent sources to feed the neural networks which thrive on copious big data quantities. According to Gartner VP Distinguished Analyst Whit Andrews, in this respect Blockchain’s impact on AI is similar to that of the Internet of Things. “Just like IoT, [Blockchain’s] creating a whole lot of data about different things making it possible for organizations to serve as an authority where previously they had to rely on others,” Andrews explained. “That’s where Blockchain changes everything.” In public decentralized blockchains, the newfound authorization of business partners, individuals, or companies can enable the sort of data quantities which, if properly mined, contribute to newfound insights. “So, maybe Artificial Intelligence again emerges as an exceptional way of interpreting that data stream,” Andrews remarked.

What is certain, however, is that the intersection of these two technologies is still forthcoming. Andrews indicated approximately one in 25 CIOs are employing AI today, and “the figure is similar with blockchain.” 2018 advancements related to deploying AI with Blockchain pertain to resolving blockchain’s scalability to encompass exorbitant big data amounts. Najmi observed that, “Due to scalability and query limitations of traditional blockchains it is difficult to implement intelligent sense and respond capabilities with predictive, prescriptive analytics and autonomous decision making.”

Low Latency
Blockchain is reducing the persistence of silo culture throughout data management in two fundamental ways. The first is related to its low latency. The boons of a shared network become minimized if transactions take inordinate amounts of time. Granted, one of the factors in decentralized blockchains is that there is a validation period. Transactions might appear with low or no latency, but they still require validation. In private blockchains with a centralized mediator, that validation period is reduced. Nonetheless, the main way implementing blockchain reduces silos is simply by connecting databases via the same ledger system. This latter aspect of blockchain is one of the reasons it is expanding across industries such as insurance and real estate. “In the U.S. and maybe in Western Europe there is good infrastructure for finding out real estate information such as who owns what, who’s got a mortgage, etc.,” Silver said. “But 90 percent of the world doesn’t have that infrastructure. So think about all the global real estate information now being accessible, and the lack of silos. That’s the perfect use case of information getting de-siloed through Blockchain.”

Growing Influence
At this point, the potential for Blockchain likely exceeds its practical utility for information assets today. Nonetheless, with its capabilities applicable to so many different facets of data management, its influence will continue to grow throughout 2018. Those capabilities encompass significant regions of finance, transactional data, legality (via smart contracts), and AI. Adoption rates ultimately depend on the viability of the public and private paradigms—both how the latter can impact the former and vice versa. The key issue at stake with these models is the resolution of standards, semantics, and governance needed to institutionalize this technology. Once that’s done, Blockchain may provide a novel means of integrating both old and new IT systems.

“If you think about the enterprise, it’s got 20, 30 years of systems that need to interoperate,” Silver said. “Old systems don’t just die; they just find a new way to integrate into a new architecture.”

Exactly what blockchain’s role in that new architecture will be remains to be seen.

Source

Sharing R Notebooks using RMarkdown

At Databricks, we are thrilled to announce the integration of RStudio with the Databricks Unified Analytics Platform. You can try it out now with this RMarkdown notebook (Rmd | HTML) or visit us at www.databricks.com/rstudio.

Introduction
Databricks Unified Analytics Platform now supports RStudio Server (press release). Users often ask if they can move notebooks between RStudio and Databricks workspace using RMarkdown — the most popular dynamic R document format. The answer is yes, you can easily export any Databricks R notebook as an RMarkdown file, and vice versa for imports. This allows you to effortlessly share content between a Databricks R notebook and RStudio, combining the best of both environments.

What is RMarkdown
RMarkdown is the dynamic document format RStudio uses. It is normal Markdown plus embedded R (or any other language) code that can be executed to produce outputs, including tables and charts, within the document. Hence, after changing your R code, you can just rerun all code in the RMarkdown file rather than redo the whole run-copy-paste cycle. An RMarkdown file can also be directly exported into multiple formats, including HTML, PDF, and Word.
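
For readers who haven’t seen one, a minimal RMarkdown file looks like this (a generic sketch, not specific to Databricks):

````markdown
---
title: "Example Analysis"
output: html_document
---

Plain Markdown text, with an executable R chunk below.

```{r cars-summary}
summary(cars)  # output is rendered into the document when it is knit
```
````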

Exporting an R Notebook to RMarkdown
To export an R notebook to an RMarkdown file, first open up the notebook, then select File > Export > RMarkdown, as shown in the figure below.

This will create a snapshot of your notebook and serialize it as an RMarkdown file, which will be downloaded to your browser.

You can then launch RStudio and upload the exported RMarkdown file. Below is a screenshot:

Importing RMarkdown files as Databricks Notebooks
Importing an RMarkdown file is no different from importing any other file type. The easiest way to do so is to right-click where you want it to be imported and select Import in the context menu:


A dialog box will pop up, just as it would when importing any other file type. Importing from both a file and a URL is supported:

You can also click the menu next to a folder’s name, at the top of the workspace area, and select Import:

Conclusion
Using RMarkdown, content can be easily shared between a Databricks R notebook and RStudio. That completes the seamless integration of RStudio in Databricks’ Unified Analytics Platform. You are welcome to try it out on the Databricks Community Edition for free. For more information, please visit www.databricks.com/rstudio.


Try Databricks for free. Get started today.

The post Sharing R Notebooks using RMarkdown appeared first on Databricks.

Originally Posted at: Sharing R Notebooks using RMarkdown