Did you tell a friend about the bad experience?
Negative word of mouth can be devastating for company and product reputation. If companies can track it and do something to fix the problem, the damage can be contained.
This is one of the selling points of the Net Promoter Score. That is, customers who rate companies low on a 0 to 10 scale (6 and below) are dubbed "Detractors" because they're more likely to spread negative word of mouth and discourage others from buying from a company. Companies with too much negative word of mouth would be unable to grow as much as others with more positive word of mouth.
But is there any evidence that low scorers are really more likely to say bad things?
Is the NPS Scoring Divorced from Reality?
There is some concern that these NPS designations are divorced from reality. That is, that there's no evidence (or reason) for detractors being classified as 0 to 6 and promoters as 9 to 10. If these designations are arbitrary or make no sense, then that's indeed concerning. (See the tweet comment from a vocal critic in Figure 1.)
To look for evidence of the designations, I re-read the 2003 HBR article by Fred Reichheld that made the NPS famous. Reichheld does mention that the reason for the promoter classification is customer referral and repurchase rates, but he doesn't provide much detail (not too surprising given it's an HBR article) or mention the reason for detractors there.
In his 2006 book, The Ultimate Question, Reichheld further explains the justification for the cutoffs between detractors, passives, and promoters. In analyzing several thousand comments, he reported that 80% of the negative word-of-mouth comments came from those who responded from 0 to 6 on the likelihood-to-recommend item (pg 30). He also reiterated the claim that 80% of customer referrals came from promoters (9s and 10s).
Contrary to at least one prominent UX voice on social media, there is some evidence and justification for the designations. It's based on referral and repurchase behaviors and the sharing of negative comments. This might not be enough evidence to convince people (and certainly not dogmatic critics) to use these designations, though. It would be good to find corroborating data.
The Challenges with Purchases and Referrals
Corroborating the promoter designation means finding purchases and referrals. It's not easy associating actual purchases and actual referrals with attitudinal data. You need a way to associate customer survey data with purchases and then track purchases from friends and colleagues. Privacy issues aside, even within the same company, purchase data is often kept in different (and guarded) databases, making associations challenging. It was something I dealt with constantly while at Oracle.
What's more, companies have little incentive to share repurchase rates and survey data with outside firms, and third parties may not have access to actual purchase history. Instead, academics and researchers often rely on reported purchases and reported referrals, which may be less accurate than records of actual purchases and actual referrals (a topic for an upcoming article). It's nonetheless common in the market research literature to rely on stated past behavior as a reasonable proxy for actual behavior. We'll also address purchases and referrals in a future article.
Collecting Word-of-Mouth Comments
But what about the negative comments used to justify the cutoff between detractors and passives? We wanted to replicate Reichheld's finding that detractors accounted for a substantial portion of negative comments using another dataset to see whether the pattern held.
We looked at open-ended comments we collected from about 500 U.S. customers regarding their most recent experiences with one of nine prominent brands and products. We collected the data ourselves from an online survey in November 2017. It included a mix of airlines, TV providers, and digital experiences. In total, we had 452 comments regarding the most recent experience with the following brands/products:
- American Airlines
- Delta Airlines
- United Airlines
- Dish Network
Participants in the survey also answered the 11-point Likelihood to Recommend question, as well as a 10-point and 5-point version of the same question.
Coding the Sentiments
The open-ended comments were coded into sentiments by two independent evaluators. Negative comments were coded -1, neutral 0, and positive 1. During the coding process, the evaluators didn't have access to the raw LTR scores (0 to 10) or other quantitative information.
In general, there was good agreement between the evaluators. The correlation between sentiment scores was high (r = .83) and they agreed 82% of the time on scores. On the remaining 18% where there was disagreement, differences were reconciled, and a sentiment was selected.
Most comments were neutral (43%) or positive (39%), with only 21% of the comments being coded as negative.
Examples of positive comments
"I flew to Hawaii for vacation, the staff was friendly and helpful! I would recommend it to anyone!" – American Airlines Customer
"I love my service with Dish Network. I use one of their affordable plans and get many options. I have never had an issue with them, and they are always willing to work with me if something has financially changed." – Dish Network Customer
Examples of neutral comments
"I logged onto Facebook, checked my notifications, scrolled through my feed, liked a few things, commented on one thing, and looked at some memories." – Facebook User
"I have a rental property and this is the current TV subscription there. I access the site to manage my account and pay my bill." – DirecTV User
Examples of negative comments
"I took a flight back from Boston to San Francisco 2 weeks ago on United. It was so terrible. My seat was tiny and the flight attendants were rude. It also took forever to board and deboard." – United Airlines Customer
"I do not like Comcast because their services consistently have errors and we always have issues with the internet. They also frequently try to raise prices on our bill through random fees that increase over time. And their customer service is unsatisfactory. The only reason we still have Comcast is because it is the best option in our area." – Comcast Customer
Associating Sentiments to Likelihood to Recommend (Qual to Quant)
We then associated each coded sentiment with the 0 to 10 values on the Likelihood to Recommend item provided by the respondent. Figure 3 shows this relationship.
For example, 24% of all negative comments were associated with people who gave a 0 on the Likelihood to Recommend scale (the lowest response option). In contrast, 35% of positive comments were associated with people who scored the maximum 10 (most likely to recommend). This is further evidence for the extreme responder effect we've discussed in an earlier article.
You can see a pattern: as the score increases from 0 to 10, the percent of negative comments goes down (r = -.71) and the percent of positive comments goes up (r = .87). There isn't a perfectly linear relationship between comment sentiment and scores (otherwise the correlations would be r = 1). For example, the percent of positive comments is actually higher at responses of 8 than 9, and the percent of negative comments is higher at 5 than 4 (possibly an artifact of this sample size). Nonetheless, the relationship is very strong.
Detractor Threshold Supported
What's quite interesting from this analysis is that at a score of 6, the ratio of positive to negative comments flips. Respondents with scores above 6 (7s–10s) are more likely to make positive comments about their most recent experience. Respondents who scored their Likelihood to Recommend at 6 or below are more likely to make negative comments (spread negative word of mouth) about their most recent experience.
At a score of 6, a participant is about 70% more likely to make a negative comment than a positive comment (10% vs. 6% respectively). As scores go lower, the ratio goes up dramatically. At a score of 5, participants are more than three times as likely to make a negative comment as a positive comment. At a score of 0, customers are about 42 times more likely to make a negative comment than a positive one (24% vs. 0.6% respectively).
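The flip-point analysis above can be sketched in code: tally negative and positive comments at each LTR score and compare the counts. The (score, sentiment) pairs below are invented for illustration, not the survey's responses:

```python
# Sketch: negative-to-positive comment ratio at each LTR score (0-10).
# Each tuple is (LTR score, coded sentiment): -1 negative, 0 neutral, +1 positive.
# This toy dataset is made up for illustration only.
from collections import Counter

responses = [(0, -1), (0, -1), (2, -1), (5, -1), (5, 1), (6, -1), (6, -1), (6, 1),
             (7, 1), (8, 1), (9, 1), (10, 1), (10, 1), (10, 0)]

neg = Counter(score for score, s in responses if s == -1)
pos = Counter(score for score, s in responses if s == 1)

def neg_to_pos_ratio(score):
    # Ratio of negative to positive comments at a given LTR score
    # (None when there are no positive comments to compare against)
    return neg[score] / pos[score] if pos[score] else None

# In this toy data, negatives outnumber positives 2:1 at a score of 6
print(neg_to_pos_ratio(6))
```

The score where this ratio crosses 1 is the "flip" point; in our real dataset that happened at 6.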
When aggregating the raw scores into promoters, passives, and detractors, we can see that a substantial 90% of negative comments are associated with detractors (0 to 6s). This is shown in Figure 4.
The positive pattern is less pronounced, but still a majority (54%) of positive comments are associated with promoters (9s and 10s). It's also interesting to see that passives (7s and 8s) have a much more uniform chance of making a positive, neutral, or negative comment.
This corroborates Reichheld's data, which showed 80% of negative comments were associated with those who scored 0 to 6. He didn't report the percent of positive comments associated with promoters and didn't associate responses to each scale point as we did here (you're welcome).
If your organization uses a five-point Likelihood to Recommend scale (5 = extremely likely and 1 = not at all likely), there are similar patterns, albeit on a more compressed scale (see Figure 5). At a response of 3, the ratio of positive to negative comments also flips, making responses of 3 or below good designations for detractors. At a score of 3, a customer is almost four times as likely to make a negative comment about their experience as a positive comment.
Summary & Takeaways
An examination of 452 open-ended comments about customers' most recent experiences with nine prominent brands and products revealed:
- Detractors accounted for 90% of negative comments. This independent evaluation corroborates the earlier analysis by Reichheld that found detractors accounted for a majority of negative word-of-mouth comments. This smaller dataset actually found a higher percentage of negative comments associated with 0 to 6 responses than Reichheld reported.
- Six is a good threshold for identifying negative comments. The probability that a comment will be negative (negative word of mouth) starts to exceed the probability of a positive comment at 6 (on the 11-point LTR scale) and 3 (on the 5-point scale). Researchers looking at LTR scores alone can use these thresholds to get some idea of the probable customer sentiment about the most recent experience.
- Repurchase and referral rates need to be examined. This analysis didnât examine the relationship between referrals or repurchases (reported and observed) and likelihood to recommend, a topic for future research to corroborate the promoter designation.
- Results are for specific brands used. In this analysis, we selected a range of brands and products we expected to represent a good range of NPS scores (from low to high). Future analyses can examine whether the pattern of scores at 6 or below correspond to negative sentiment in different contexts (e.g. for the most recent purchase) or for other brands/products/websites.
- Think probabilistically. This analysis doesn't mean a customer who gave a score of 6 or below necessarily had a bad experience or will say bad things about a company. Nor does it mean that a customer who gives a 9 or 10 necessarily had a favorable experience. You should think probabilistically about UX measures in general, and the NPS too. That is, as scores go down on the Likelihood to Recommend item, it's more likely (higher probability) that someone will say negative things, but it's not guaranteed.
- Examine your relationships between scores and comments. Most companies we work with have a lot of NPS data associated with verbatim comments. Use the method of coding sentiments described here to see how well the detractor designation matches sentiment and, if possible, see how well the promoter designations correspond with repurchase and referral rates or other behavioral measures (and consider sharing your results!).
- Take a measured approach to making decisions. Many aspects of measurement aren't intuitive, and it's easy to dismiss what we don't understand or are skeptical about. Conversely, it's easy to accept what's "always been done" or published in high-profile journals. Take a measured approach to deciding what's best (including how to use the NPS). Don't blindly accept programs that claim to be revolutionary without examining the evidence. And don't be quick to toss out the whole system because it has shortcomings or is over-hyped (we'd have to toss out a lot of methods and authors if that were the case). In all cases, look for corroborating evidence…probably something more than what you find on Twitter.
The System Usability Scale has been around for decades and is used by hundreds of organizations globally.
The 10-item SUS questionnaire is a measure of a user's perception of the usability of a "system."
The SUS questionnaire is scored by combining the 10 items into a single SUS score ranging from 0 to 100. From its creation, though, John Brooke cautioned against interpreting individual items:
"Note that scores for individual items are not meaningful on their own" – John Brooke
Brooke's caution against examining scores for the individual items of the SUS was appropriate at the time. After all, he was publishing a "quick and dirty" questionnaire with analyses based on data from 20 people.
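For reference, Brooke's standard SUS scoring (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 to reach the 0–100 range) can be sketched as:

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten item responses on a 1-5 scale.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item_number, r in enumerate(responses, start=1):
        total += (r - 1) if item_number % 2 == 1 else (5 - r)
    return total * 2.5

# All odd items rated 5 and all even items rated 1 yields the maximum score
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))
```

Note that the per-item contributions computed inside the loop are exactly the "individual item scores" Brooke cautioned against interpreting in isolation.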
There is a sort of conventional wisdom that multiple items are superior to single items; in fact, single-item measures and analyses are often dismissed in peer-reviewed journals.
More items will, by definition, increase the internal consistency reliability of a questionnaire when measured using Cronbach's alpha. In fact, you can't measure internal consistency reliability with only one item. However, other methods measure reliability, including test-retest reliability. Single-item measures, such as satisfaction, brand attitude, task ease, and likelihood to recommend, also exhibit sufficient test-retest reliability, and little if anything may be gained by using multiple items.
John Brooke didn't publish any benchmarks or guidance for what makes a "good" SUS score. But because the SUS has been used extensively by other researchers who have published their results, we have been able to derive a database of scores. Table 1 shows SUS grades and percentiles that Jim Lewis and I put together from that database, which itself is an adaptation of work by Bangor and Kortum.
|Grade||SUS Score||Percentile Range|
|B||74.1 – 77.1||70 – 79|
To use the table, find your raw SUS score in the middle column and then find its corresponding grade in the left column and percentile rank in the right column. For example, a SUS score of 75 is a bit above the global average of 68 and nets a "B" grade. A SUS score below 50 puts it in the "F" grade, with a percentile rank among the worst interfaces (worse than 86%, or better than only 14%).
Why Develop Item-Level Benchmarks?
While the SUS provides an overall measure of perceived ease and our grading scale provides a way to interpret the raw score, researchers may want to measure and set targets for other more specific experience attributes (e.g. perceptions of findability, complexity, consistency, and confidence). To do so, researchers would need to develop specific items to measure those more specific attributes.
Some attributes, such as findability, do not appear in the 10 SUS items. Other attributes, such as perceived complexity (Item 2), perceived ease of use (Item 3), perceived consistency (Item 6), perceived learnability (Item 7), and confidence in use (Item 9) do appear in the SUS.
Researchers who use the SUS and who also need to assess any of these specific attributes would need to decide whether to ask participants in their studies to rate this attribute twice (once in the SUS and again using a separate item) or to use the response to the SUS item in two ways (contributing to the overall SUS score and as a measure of the specific attribute of interest). The latter, using the response to the SUS item in two ways, is the more efficient approach.
In short, using item benchmarks saves respondents time as they answer fewer items and saves researchers time as they donât have to derive new items and get the bonus of having benchmarks to make the responses more meaningful.
Developing SUS Item Level Benchmarks
To help researchers better understand individual SUS items, Jim Lewis and I compiled data from 166 unpublished industrial usability studies/surveys comprising 11,855 individual SUS questionnaires.
We then used regression equations to predict overall SUS scores from the individual items. We found each item explained between 35% and 89% of the variability in the full SUS score (a large percentage for a single item). Full details of the regression equations and process are available in the Journal of Usability Studies article.
To make item benchmarks easy to reference, we computed the score you'd need for an average "C" score of 68 or a good score of 80, an "A-." Why 80? We've found that a SUS of 80 has become a common industrial goal. It's also a good psychological threshold that's attainable. Achieving a raw SUS score of 90 sounds better but is extraordinarily difficult (only one study in the database exceeded 90: data from Netflix).
Table 2 shows the mean score you would need for each item to achieve an average "C" or good "A-" score.
|SUS Item||Target for Average Score||Target for Good Score|
|1. I think that I would like to use this system frequently.||≥ 3.39||≥ 3.80|
|2. I found the system unnecessarily complex.||≤ 2.44||≤ 1.85|
|3. I thought the system was easy to use.||≥ 3.67||≥ 4.24|
|4. I think that I would need the support of a technical person to be able to use this system.||≤ 1.85||≤ 1.51|
|5. I found the various functions in this system were well integrated.||≥ 3.55||≥ 3.96|
|6. I thought there was too much inconsistency in this system.||≤ 2.20||≤ 1.77|
|7. I would imagine that most people would learn to use this system very quickly.||≥ 3.71||≥ 4.19|
|8. I found the system very cumbersome to use.||≤ 2.25||≤ 1.66|
|9. I felt very confident using the system.||≥ 3.72||≥ 4.25|
|10. I needed to learn a lot of things before I could get going with this system.||≤ 2.09||≤ 1.64|
For example, if you're using Item 3, "I thought the system was easy to use," then a mean score of 3.67 would correspond to a SUS score of 68 (an average overall system score). For an above-average SUS score of 80, the corresponding target for Item 3 would be a mean score of at least 4.24.
Note that due to the mixed tone of the SUS, the directionality of the item targets differs for odd- and even-numbered items. Specifically, for odd-numbered items, observed means need to be greater than the targets; for even-numbered items, observed means need to be less than the targets. For example, for Item 2, "I found the system unnecessarily complex," you would want a mean below 2.44 to achieve an average score (SUS equivalent of 68) and below 1.85 for a good score (SUS equivalent of 80).
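Applying Table 2 in practice means checking an observed item mean against its benchmark while honoring that directionality difference. A small sketch (the targets are the "average" column from Table 2):

```python
# Sketch: check an observed SUS item mean against the Table 2 "average"
# (SUS = 68) targets, honoring odd/even directionality.
average_targets = {
    1: 3.39, 2: 2.44, 3: 3.67, 4: 1.85, 5: 3.55,
    6: 2.20, 7: 3.71, 8: 2.25, 9: 3.72, 10: 2.09,
}

def meets_average(item_number, observed_mean):
    """True if the observed item mean meets the SUS-68 benchmark for that item."""
    target = average_targets[item_number]
    if item_number % 2 == 1:        # odd item: positively worded, higher is better
        return observed_mean >= target
    return observed_mean <= target  # even item: negatively worded, lower is better

print(meets_average(3, 3.9))  # above the 3.67 floor for Item 3
print(meets_average(2, 2.6))  # above the 2.44 ceiling for Item 2, so it fails
```

Swapping in the second column of Table 2 gives the equivalent check against the "good" (SUS = 80) targets.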
The popularity of the SUS has allowed for the creation of normalized databases and guidance on what constitutes poor, good, or excellent scores. Researchers may on occasion want to use single items from the SUS to benchmark more specific constructs (e.g. "I felt very confident using the system" representing user confidence). Using data from almost 12,000 participants, we were able to create benchmarks for individual SUS items corresponding to average "C" and high "A-" SUS-equivalent scores. These benchmarks let researchers know what mean value to aim for when interpreting single items from the SUS to achieve an average or good experience.
For most of computing's history, data meant "structured" data: data that fits neatly into predefined categories and rows stored in databases or spreadsheets. But the big data movement has changed all of that with the proliferation of unstructured data analysis. Unstructured data is any data that doesn't fit into a predefined data model. It includes things like video, images, text, and all the data being logged by sensors and the myriad of digital devices. Where structured data is relatively easy to store and analyze using traditional technology, unstructured data isn't.
Nonetheless, today massive collections of unstructured data are being analyzed for altruistic purposes like combating crime and preventing disease, but also for profit-motivated goals like spotting business trends. And as we've entered an era of pervasive surveillance (including aerial surveillance by drones and low-earth-orbit satellites capable of delivering 50 cm resolution imagery), media content (photos, videos, and audio) is more relevant to big data analytics than ever before.
Unstructured data tends to be vastly larger than structured data, and is mostly responsible for our crossing the threshold from regular old data to "big data." That threshold is not defined by a specific number of terabytes or even petabytes, but by what happens when data accumulates to an amount so large that innovative techniques are required to store, analyze, and move it. Public cloud computing technology is one of these innovations being applied to big data analytics because it offers a virtually unlimited elastic supply of compute power, networking, and storage with a pay-for-use pricing model (all of which opens up new possibilities for analyzing both unstructured and structured big data).
Before its recent and unfortunate shutdown, the respected tech news and research site GigaOM released a survey on enterprise big data. In it, over 90% of participants said they planned to move more than a terabyte of data into the cloud, and 20% planned to move more than 100 TB. Cloud storage is a compelling solution, both as an elastic repository for this overflowing data and as a location readily accessible to cloud-based analysis.
However, one of the challenges that comes with using public cloud computing and cloud storage is getting the data into the cloud in the first place. Moving large files and bulk data sets over the Internet can be very inefficient with traditional protocols like FTP and HTTP (the most common ways organizations move large files, and the foundation for most of the options cloud storage providers offer for getting your data to them, short of shipping hard drives).
In that same GigaOM survey, 24% expressed concern about whether their available bandwidth could accommodate pushing their large data volumes up to the cloud, and 21% worried that they didn't have the expertise to carry out the data migration (read about all the options for moving data to any of the major cloud storage providers, and you too might be intimidated).
While bandwidth and expertise are very legitimate concerns, there are SaaS (Software as a Service) large file transfer solutions that can make optimal use of bandwidth, are very easy to use and integrate with Amazon S3, Microsoft Azure and Google Cloud. In fact, the foundation technology of these solutions was originally built to move very large media files throughout the production, post production and distribution of film and television.
Back in the early 2000s, when the Media & Entertainment industry began actively transitioning from physical media, including tape and hard drives, to digital file-based workflows, it had a big data movement problem too. For companies like Disney and the BBC, sending digital media between their internal locations and external editing or broadcast partners was a serious issue. Compared to everything else moving over the Internet, those files were huge. (And broadcast masters are relatively small compared to the 4K raw camera footage being captured today. For example, an hour of raw camera footage often requires a terabyte or more of storage.)
During M&Eâs transition from physical media to file-based media, companies like Signiant started developing new protocols for the fast transfer of large files over public and private IP networks, with the high security that the movie industry requires for their most precious assets. The National Academy of Television Arts and Sciences even recognized Signiantâs pioneering role with a Technology and Engineering Emmy award in 2014.
Today, that technology has evolved in step with the cloud revolution, and SaaS accelerated large file transfer technology is expanding to other industries. Far faster and more reliable than older technologies like FTP and HTTP, this solution can also be delivered as a service, so users do not have to worry about provisioning hardware and software infrastructure, including scaling and balancing servers for load peaks and valleys. The "expertise" many worry about needing is a non-issue because the solution is so simple to use. And it's being used in particular to push large volumes to cloud storage for all kinds of time-sensitive projects, including big data analytics. For example, scientists are analyzing images of snow and ice cover to learn more about climate change, and (interesting though less benevolent) businesses are analyzing images of competitors' parking lots, counting cars by make and model, to understand the shopping habits and demographics of their customers.
It's always fascinating to see how innovation occurs. It almost never springs from nothing, but is adapted from techniques and technologies employed somewhere else to solve a different challenge. Who would have thought, at the turn of the century, that the technology developed for Media & Entertainment would be so relevant to big data scientific, government, and business analytics? And that technology used to produce and deliver entertainment could be leveraged for the betterment of society?
Originally posted via “Borrowing Technology from Media & Entertainment for Big Data Analytics in the Cloud”
In my new book, Total Customer Experience, I illustrate why three types of customer loyalty are needed to understand the different ways your customers can show their loyalty towards your company or brand. The three types of loyalty are:
- Retention Loyalty: likelihood of customers to stay with a company
- Advocacy Loyalty: likelihood of customers to recommend the company/ advocate on the company’s behalf
- Purchasing Loyalty: likelihood of customers to expand their relationship with the company
Using this multi-faceted model, I developed a loyalty measurement approach, referred to as the RAPID Loyalty Approach, to help companies get a more comprehensive picture of customer loyalty. Understanding the factors that impact these different types of loyalty helps companies target customer experience improvement strategies to increase different types of customer loyalty.
When companies are able to link these RAPID loyalty metrics with other customer information, like purchase history, campaign responses, and employee/partner feedback, the customer insights become deeper. TCELab (where I am the Chief Customer Officer) is working with Clicktools to help Salesforce customers implement the RAPID Loyalty Approach. This partnership brings together TCELab's survey knowledge and advisory services with Clicktools' exceptional feedback software and Salesforce integration; for the fifth consecutive year, Clicktools has received the Salesforce AppExchange™ Customer Choice Award for Best Survey App.
TCELab will include RAPID surveys in Clicktools' survey library, available in all Clicktools editions and integrated easily with a RAPID Salesforce.com custom object. Salesforce reports and dashboards, including linkage analysis, will follow. Customers can call on the expertise of TCELab for advice on tailoring the surveys to their organization and for support in analysis and reporting.
Joint Whitepaper from TCELab and Clicktools
David Jackson, founder and CEO of Clicktools, and I have co-written a whitepaper titled "RAPID Loyalty: A Comprehensive Approach to Customer Loyalty," to present the basic structure and benefits of the RAPID approach and to offer Clicktools customers access to a special program for getting started.
Emergence of #DataOps Age – @AndyHPalmer #FutureOfData
In this podcast, @AndyPalmer from @Tamr sat with @Vishaltx from @AnalyticsWeek to talk about the emergence of, need for, and market for DataOps, a specialized capability arising from the merger of data engineering and the DevOps ecosystem, driven by increasingly convoluted data silos and complicated processes. Andy shared his journey, what some businesses and their leaders are doing wrong, and how businesses need to rethink their data silos to future-proof themselves. This is a good podcast for any data leader thinking about cracking the code on getting high-quality insights from data.
Andy’s Recommended Read:
Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker https://amzn.to/2Lc6WqK
The Three-Body Problem by Cixin Liu and Ken Liu https://amzn.to/2rQyPvp
Andy Palmer is a serial entrepreneur who specializes in accelerating the growth of mission-driven startups. Andy has helped found and/or fund more than 50 innovative companies in technology, health care, and the life sciences. Andy's unique blend of strategic perspective and disciplined tactical execution is suited to environments where uncertainty is the rule rather than the exception. Andy has a specific passion for projects at the intersection of computer science and the life sciences.
Most recently, Andy co-founded Tamr, a next generation data curation company and Koa Labs, a start-up club in the heart of Harvard Square, Cambridge, MA.
Specialties: Software, Sales & Marketing, Web Services, Service Oriented Architecture, Drug Discovery, Database, Data Warehouse, Analytics, Startup, Entrepreneurship, Informatics, Enterprise Software, OLTP, Science, Internet, ecommerce, Venture Capital, Bootstrapping, Founding Team, Venture Capital firm, Software companies, early stage venture, corporate development, venture-backed, venture capital fund, world-class, stage venture capital
#FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.
If you or anyone you know wants to join in,
Register your interest and email at email@example.com
Want to sponsor?
Email us @ firstname.lastname@example.org
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy
Discussing the World of Crypto with @JoelComm / @BadCrypto #FutureOfData
In this podcast, Joel Comm from The Bad Crypto Podcast sat with Vishal Kumar, CEO of AnalyticsWeek, to discuss the world of cryptocurrencies. The discussion sheds light on the nuances of the rapidly exploding world of cryptocurrencies and some of the thinking behind the currencies, as well as the opportunities and risks in the industry. Joel shares his insights on how to think about these currencies and the long-term implications of the algorithms that run them. The podcast is a great listen for anyone who wants to understand the world of cryptocurrencies.
*Please note: this podcast and/or its content in no way offers investment advice, nor is it intended to exert any positive or negative influence. Cryptocurrencies are highly volatile in nature, and any investor must use absolute caution and care when evaluating such currencies.*
Joel’s Recommended Read:
Cryptocurrencies 101 By James Altucher http://bit.ly/2Bi5FMv
As a knowledgeable and inspirational speaker, Joel speaks on a variety of business and entrepreneurial topics. He presents a step-by-step playbook on how to use social media as a leveraging tool to expand the reach of your brand, increase your customer base, and create fierce brand loyalty for your business. Joel is also able to speak with authority on the various ways to harness the marketing power of technology to explode profits. He offers an inspiring yet down-to-earth call to action for those who dream of obtaining growth and financial success. As someone who went from having only 87 cents in his bank account to creating multiple successful businesses, Joel is uniquely poised to instruct and inspire when it comes to using the various forms of new media as avenues toward the greater goal of business success. He is a broadcast veteran with thousands of hours of radio, podcasting, television, and online video experience. Joel is the host of two popular, yet completely different podcasts. FUN with Joel Comm features the lighter side of top business and social leaders. The Bad Crypto Podcast makes cryptocurrency and bitcoin understandable to the masses.
Joel is the New York Times best-selling author of 14 books, including The AdSense Code, Click Here to Order: Stories from the World's Most Successful Entrepreneurs, KaChing: How to Run an Online Business that Pays and Pays, Twitter Power 3.0, and Self Employed: 50 Signs That You Might Be an Entrepreneur. He has also written over 40 ebooks. He has appeared in The New York Times, on Jon Stewart's The Daily Show, on CNN online, on Fox News, and in many other places.
The #FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners onto the show to discuss their journeys in creating the data-driven future.
If you or anyone you know wants to join in,
Register your interest @ http://play.analyticsweek.com/guest/
Want to sponsor?
Email us @ email@example.com
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy
In this podcast, Nick Howe (@NickJHowe) from @Area9Learning talks about the transforming learning landscape. He sheds light on some of the challenges in learning and some of the ways learning could keep pace with an evolving world and its needs. Nick also outlines tactical steps that businesses could adopt to create a world-class learning organization. This podcast is a must for any learning organization.
Nick’s Recommended Read:
The End of Average: Unlocking Our Potential by Embracing What Makes Us Different by Todd Rose https://amzn.to/2kiahYN
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom https://amzn.to/2IAPURg
Nick Howe is an award-winning Chief Learning Officer and business leader with a focus on the application of innovative education technologies. He is the Chief Learning Officer at Area9 Lyceum, one of the global leaders in adaptive learning technology, a Strategic Advisor to the Institute of Simulation and Training at the University of Central Florida, and a board advisor to multiple EdTech startups.
For twelve years Nick was the Chief Learning Officer at Hitachi Data Systems where he built and led the corporate university and online communities serving over 50,000 employees, resellers and customers.
With over 25 years' global sales, sales enablement, delivery, and consulting experience with Hitachi, EDS Corporation, and Bechtel Inc., Nick is passionate about the transformation of customer experiences, partner relationships, and employee performance through learning and collaboration.
The #JobsOfFuture podcast is a conversation starter that brings leaders, influencers, and leading practitioners onto the show to discuss their journeys in creating the data-driven future.
Want to sponsor?
Email us @ firstname.lastname@example.org
#JobsOfFuture #Leadership #Podcast #Future of #Work #Worker & #Workplace
If the defining characteristic of data management in 2018 is the heterogeneity of contemporary computing environments, then Blockchain is a considerable factor contributing to its decentralization.
Expectations for this distributed ledger technology are decidedly high. Its low-latency, irrefutable transaction capabilities are overtaking so many verticals that one of the predictions in Forrester's Top 10 Technology Trends To Watch: 2018 To 2020 is that by 2019 "a viable blockchain-based market will be commercialized."
Blockchain's growing popularity is directly attributable to its utilitarian nature, which transcends individual industries and use cases. It's not just a means of revolutionizing finance via cryptocurrencies such as Bitcoin, but of implementing new security paradigms, legal measures, and data sources for Artificial Intelligence. Most importantly, it could very well herald the end of silo culture.
By helping to seamlessly connect heterogeneous databases around the world in a peer-to-peer fashion, blockchain is projected to have an overall impact "as disruptive as the internet was 20 years ago, and still is," according to Algebraix Data CEO Charlie Silver.
For blockchain to realize this future, however, a number of points of standardization within and between blockchain networks must solidify.
They should begin doing so in earnest in the coming year.
Private Blockchains, Centralized Authority
The most common use of blockchain is validating transactions and issuing monetary value for cryptocurrencies. These are typical instances of what are known as public blockchains, in which the ledger is distributed amongst individuals or businesses for sharing and determining the integrity of transactional data. What could truly trigger the expansion of blockchain adoption, however, is the growing credence given to private blockchains. These networks extend only to a well-defined set of participants (i.e., they are not open to the general public), such as those in a supply chain or serving some other discrete business purpose. The strength of public blockchains lies largely in the lack of a central authority, which adds to the indisputable nature of transactions. In private blockchains, however, that centralized authority is the key to assuring the integrity of data exchanges. "What that means is there is a blockchain orchestrator that enables the interactions between the parties, coordinates all those things, provides the governance, and then when the transaction is done...you have permanent immutability, and transparency with permissions and so on," commented One Network SVP of Products Adeel Najmi.
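The immutability referred to here rests, in both public and private blockchains, on the same basic mechanism: each block's hash covers the previous block's hash, so altering any historical record invalidates every block after it. The following is a minimal, hypothetical Python sketch of that hash-chaining idea, not a depiction of any production ledger:

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents (JSON-serialized, sorted keys)."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transaction):
    """Append a new block whose hash covers the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "transaction": transaction}
    block["hash"] = block_hash({"prev_hash": prev, "transaction": transaction})
    chain.append(block)
    return chain

def verify(chain):
    """Re-derive every hash; any tampering breaks the chain of links."""
    prev = "0" * 64
    for block in chain:
        expected = block_hash({"prev_hash": block["prev_hash"],
                               "transaction": block["transaction"]})
        if block["prev_hash"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"from": "A", "to": "B", "amount": 10})
append_block(chain, {"from": "B", "to": "C", "amount": 4})
print(verify(chain))   # True: untampered
chain[0]["transaction"]["amount"] = 1000
print(verify(chain))   # False: editing history invalidates later hashes
```

What a real network adds on top of this skeleton is consensus: in a public blockchain, distributed validation establishes which chain is authoritative; in a private one, the centralized orchestrator plays that role.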
In addition to providing governance standards for all parties in the network, a centralized mediator also facilitates consistency in semantics and metadata, which is crucial for exchanging data. Without that centralization, blockchain communities must define their own data governance protocols, semantic standards, and Master Data Management modeling conventions. The notion of standards, and the legality of exchanges between blockchains in the form of smart contracts, will also come to prominence in 2018. Smart contracts involve denoting what various exchanges of data mean, what takes place when such data is transmitted, and what parties must do in agreement with one another for any variety of transactions. However, the dearth of standards for blockchain, particularly as they might apply between blockchains, leads to questions about the legality of certain facets of smart contracts. According to Gartner: "Much of the legal basis for identity, trust, smart contracts, and other components are undefined in a blockchain context. Established laws still need to be revised and amended to accommodate blockchain use cases, and financial reporting is still unclear." These points of uncertainty regarding blockchain correlate to its adoption rate, yet are to be expected for an emerging technology. Silver noted, "Like in the Internet of Things, there's all kinds of standards debates going on. All new technologies have it."
One of the more exciting developments to emerge in 2018 will be the synthesis of blockchain technologies with those for AI. There are a number of hypothetical ways in which these two technologies can influence, and aid, one another. Perhaps one of the more concrete ones is that the amounts of data involved in blockchain make excellent sources to feed the neural networks which thrive on copious big data quantities. According to Gartner VP Distinguished Analyst Whit Andrews, in this respect blockchain's impact on AI is similar to the Internet of Things' impact on AI. "Just like IoT, [blockchain is] creating a whole lot of data about different things, making it possible for organizations to serve as an authority where previously they had to rely on others," Andrews explained. "That's where blockchain changes everything." In public, decentralized blockchains, the newfound authorization of business partners, individuals, or companies can enable the sort of data quantities which, if properly mined, contribute to newfound insights. "So, maybe Artificial Intelligence again emerges as an exceptional way of interpreting that data stream," Andrews remarked.
What is certain, however, is that the intersection of these two technologies is still forthcoming. Andrews indicated that approximately one in 25 CIOs is employing AI today, and "the figure is similar with blockchain." Advancements in 2018 related to deploying AI with blockchain pertain to resolving blockchain's scalability to encompass exorbitant big data amounts. Najmi observed that, "Due to scalability and query limitations of traditional blockchains it is difficult to implement intelligent sense and respond capabilities with predictive, prescriptive analytics and autonomous decision making."
Blockchain is reducing the persistence of silo culture throughout data management in two fundamental ways. The first is related to its low latency. The boons of a shared network are minimized if transactions take inordinate amounts of time. Granted, one of the factors in decentralized blockchains is that there is a validation period: transactions might appear with low or no latency, but they still require validation. In private blockchains with a centralized mediator, that validation period is reduced. Nonetheless, the main way implementing blockchain reduces silos is simply by connecting databases via the same ledger system. This latter aspect of blockchain is one of the reasons it is expanding across industries such as insurance and real estate. "In the U.S. and maybe in Western Europe there is good infrastructure for finding out real estate information such as who owns what, who's got a mortgage, etc.," Silver said. "But 90 percent of the world doesn't have that infrastructure. So think about all the global real estate information now being accessible, and the lack of silos. That's the perfect use case of information getting de-siloed through blockchain."
At this point, the potential for blockchain likely exceeds its practical utility for information assets today. Nonetheless, with its capabilities applicable to so many different facets of data management, its influence will continue to grow throughout 2018. Those capabilities encompass significant regions of finance, transactional data, legality (via smart contracts), and AI. Adoption rates ultimately depend on the viability of the public and private paradigms, both how the latter can impact the former and vice versa. The key issue at stake with these models is the resolution of the standards, semantics, and governance needed to institutionalize this technology. Once that's done, blockchain may provide a novel means of integrating both old and new IT systems.
âIf you think about the enterprise, itâs got 20, 30 years of systems that need to interoperate,â Silver said. âOld systems donât just die; they just find a new way to integrate into a new architecture.â
Exactly what blockchain's role in that new architecture will be remains to be seen.
At Databricks, we are thrilled to announce the integration of RStudio with the Databricks Unified Analytics Platform.
The Databricks Unified Analytics Platform now supports RStudio Server (press release). Users often ask if they can move notebooks between RStudio and the Databricks workspace using RMarkdown, the most popular dynamic R document format. The answer is yes: you can easily export any Databricks R notebook as an RMarkdown file, and vice versa for imports. This allows you to effortlessly share content between a Databricks R notebook and RStudio, combining the best of both environments.
What is RMarkdown?
RMarkdown is the dynamic document format RStudio uses. It is normal Markdown plus embedded R (or any other language) code that can be executed to produce outputs, including tables and charts, within the document. Hence, after changing your R code, you can simply rerun all the code in the RMarkdown file rather than repeat the whole run-copy-paste cycle. An RMarkdown file can also be exported directly into multiple formats, including HTML, PDF, and Word.
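As a minimal illustration (the title and code chunk are hypothetical examples, not from the announcement), an RMarkdown file interleaves plain Markdown prose with executable R chunks delimited by ```` ```{r} ````:

````markdown
---
title: "Example report"
output: html_document
---

## Summary statistics

Ordinary Markdown text goes here.

```{r}
# This R chunk runs when the document is rendered,
# and its output (a table) is embedded in the result.
summary(cars)
```
````

Rendering the file (e.g., with RStudio's Knit button) executes every chunk and produces the chosen output format with the results inlined.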
Exporting an R Notebook to RMarkdown
To export an R notebook to an RMarkdown file, first open up the notebook, then select File > Export > RMarkdown, as shown in the figure below.
This creates a snapshot of your notebook and serializes it as an RMarkdown file, which is then downloaded by your browser.
You can then launch RStudio and upload the exported RMarkdown file. Below is a screenshot:
Importing RMarkdown files as Databricks Notebooks
Importing an RMarkdown file is no different from importing any other file type. The easiest way to do so is to right-click where you want it to be imported and select Import in the context menu:
You can also click next to a folder's name, at the top of the workspace area, and select Import:
Using RMarkdown, content can be easily shared between a Databricks R notebook and RStudio. This completes the seamless integration of RStudio into the Databricks Unified Analytics Platform. You are welcome to try it out for free on the Databricks Community Edition. For more information, please visit www.databricks.com/rstudio.
To read more about our efforts with SparkR on Databricks, we refer you to the following assets:
- Parallelizing Large Simulations with Apache SparkR on Databricks
- On-Demand Webinar and FAQ: Parallelize R Code Using Apache Spark
- Benchmarking Big Data SQL Platforms in the Cloud
- Using sparklyr in Databricks
Try Databricks for free. Get started today.