The One Number You Need to Grow (A Replication)

The one number you need to grow.

That was the title of the 2003 HBR article by Fred Reichheld that introduced the Net Promoter Score as a way to measure customer loyalty.

It’s a strong claim that a single attitudinal item can portend company success. And strong claims need strong evidence (or at least corroborating evidence).

In an earlier article, I examined the original evidence put forth by Reichheld, looked for other published evidence, and discussed the findings at the event, How Harmful is the Net Promoter Score?

To establish validity and support the claim that the NPS predicts growth, Fred Reichheld reported that the NPS was the best or second-best predictor of growth in 11 of 14 industries (p. 28).

The appendix of his 2006 book The Ultimate Question supports the relationship with data from 35 companies in six industries (computers, life insurance, Korean auto insurance, U.S. airlines, Internet Service Providers, and UK supermarkets). His 2003 HBR article contained five more companies and one additional industry (rental cars) for a total of 40 companies and 7 industries.

Close examination of the data reveals that Reichheld used historical, not future growth. He showed the three-year average growth rates (1999–2002) correlated with the two-year average Net Promoter Scores (2001–2002). In other words, the NPS correlated with past growth rates (as opposed to future growth rates). This does establish validity (a sort of concurrent validity) but not predictive validity.

To assess the predictive ability of the NPS, I looked at the U.S. airline industry in 2013 and found a strong correlation between future growth and NPS (but only after accounting for a major merger in the industry).

The published literature on the topic in the last 15 years isn’t terribly helpful either. I found eight other studies that examined the NPS’s predictive ability (Figure 1). I was, however, a bit disappointed in the quality of many of the studies given the ubiquity of the Net Promoter Score.

As Figure 1 shows, three of the eight studies found medium to strong correlations but used historical or current revenue (not future). Of the five remaining studies that used future metrics, two were authored by a competitor of Satmetrix (a possible competitive bias) and one was from a book with connections to Satmetrix that was not peer reviewed (and had an agenda to promote the NPS).

Figure 1: Summary of papers examining the NPS and growth (many used historical revenue or had methodological flaws—like not actually using the 11-point LTR item).

Surprisingly, two of the three studies that looked at future metrics didn’t use the 11-point Likelihood to Recommend question (Keiningham et al., 2007b; Morgan and Rego, 2006). One study that used a 10-point version found no correlation with business growth, or with any other firm-level metrics, for the three Norwegian industries it examined (Keiningham et al., 2007a), an unusual finding given that all other studies found some correlation with metrics.

Only the study by de Haan et al. (2015) actually used the 11-point Likelihood to Recommend item and found the Net Promoter Score did have a small correlation with future intent (collected in a longitudinal study). It wasn’t the best predictor, but it did correlate with future metrics (which was similar to the finding from the study by Keiningham et al., 2007b using a 5-point LTR).
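
For readers unfamiliar with the scoring, here’s a minimal sketch (in Python) of how a Net Promoter Score is computed from the standard 0–10 (11-point) Likelihood-to-Recommend item. The 9–10 promoter and 0–6 detractor cut points are the standard NPS definition; the sample responses are hypothetical.

```python
def net_promoter_score(ltr_responses):
    """Compute an NPS from 0-10 Likelihood-to-Recommend responses:
    % promoters (9-10) minus % detractors (0-6); 7-8 count as passives."""
    n = len(ltr_responses)
    promoters = sum(r >= 9 for r in ltr_responses)
    detractors = sum(r <= 6 for r in ltr_responses)
    return 100.0 * (promoters - detractors) / n

# Hypothetical responses for illustration only
print(net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10]))  # 4 promoters - 2 detractors over 8 -> 25.0
```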

I think there are at least two reasons for the dearth of published data examining the NPS and growth:

  1. Little upside: There’s little upside for Satmetrix and Reichheld to fund and publish more research to establish the predictive validity of the NPS. If it’s already in wide usage (most Fortune 500 companies use it), then there’s little to gain. That Reichheld didn’t include more data in his 2nd edition of The Ultimate Question likely supports this. (He even excluded the appendix that was in the 1st.)
  2. It’s difficult: Predicting revenue at the customer or company level requires data from two points in time. Longitudinal data takes time to collect (by definition years in this case). It’s also hard to associate attitudinal data to financial performance. Companies have little reason to expose their own data and third-party firms have trouble getting access.

Predicting Future Growth with the Original Data

A few papers I cited above pointed out the problem with Reichheld using historical revenue to show future growth, but none that I found actually examined whether the published NPS data predicted future growth for the same industries. Keiningham et al. (2007a) did use some of Reichheld’s data to show that the American Customer Satisfaction Index was an equal or better predictor of historical revenue, but didn’t look at future growth.

So, I revisited the very data used to establish the NPS validity—the 1999–2002 Net Promoter Score data Reichheld published in his 2006 book appendix and 2003 HBR article.

With the help of research assistants, I dug through old annual reports, press releases, articles, and the Internet Archive to match the financial metrics collected more than 15 years ago. It wasn’t easy, as many companies merged or went out of business, and whole industries morphed (AOL anyone?). We had to piece together numbers from many different sources and make some assumptions (noted below).

After several weeks of digging we had good results and were able to find data for the same six industries used in the 2006 book plus the one industry included in the HBR article for the years 2002–2006. Table 1 shows the industry, the metric we used, the year the NPS data was reported in Reichheld’s book, the current/historical years Reichheld used, and then the years we found data for to predict future growth.

Industry | Metric | NPS Data | Reichheld Years | Our Future Years
U.S. PC market | PC Shipments | 2001-2001 | 1999-2002 | 2002-2005
U.S. Life Insurance market | Life premiums | 2001-2002 | 1999-2003 | 2002-2005
U.S. airlines market | Sales | 2001-2002 | 1999-2002 | 2002-2005
U.S. Internet Service Providers | Sales | 2002 | 1999-2002 | 2002-2005
U.S. car rental market | Revenue | 2002 | 1999-2002 | 2002-2005
UK supermarkets | Sales | 2003 | 1999-2003 | 2003-2006
Korean auto insurance | Sales | 2003 | 2001-2003 | 2003-2006

Table 1: Industries used to establish the predictive ability of the Net Promoter Score from The Ultimate Question and the 2003 HBR article.

Results

We used two future growth periods to assess the predictive validity of the NPS. The first is the two years immediately following the NPS data (and graphed below). For the U.S. industries this was 2002–2003; for the international industries this was 2003–2004 (which matches the years of NPS data Reichheld used). The second is a longer period of three to four years of growth (2002–2005 for U.S. industries and 2003–2006 for international). We computed Pearson correlations for each industry, then averaged the correlations using the Fisher Z transformation to account for the non-normality of correlations. Finally, we converted the correlations to R2 values to match the fit statistic reported in The Ultimate Question.
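
As a minimal sketch of this averaging procedure (in Python with NumPy): the input correlations below are back-computed from the rounded two-year R2 values in Table 2 (positive signs assumed), so the output only approximates the reported 38% average.

```python
import numpy as np

def fisher_average_r2(correlations):
    """Average Pearson correlations via the Fisher r-to-z transformation,
    then convert the averaged correlation back to an R2 value."""
    z = np.arctanh(np.asarray(correlations, dtype=float))  # r -> z
    r_avg = np.tanh(z.mean())                              # mean z -> r
    return r_avg ** 2

# Correlations back-computed from the rounded two-year R2 values in Table 2,
# assuming positive signs (approximate, for illustration).
r_two_year = np.sqrt([0.27, 0.39, 0.08, 0.20, 0.08, 0.76, 0.48])
print(f"Average R2 = {fisher_average_r2(r_two_year):.0%}")
```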

Reichheld notes that they found the log of the change in NPS boosted the explanatory power (R2) of the NPS, but only raw NPS numbers were reported in the appendix. With only one year of NPS data, we didn’t have changes in the NPS, so we replicated the approach in the appendix using only the single Net Promoter Scores.

Table 2 shows the results for Reichheld’s originally reported R2 values using current or historical revenue and our R2 values for the subsequent two and four years.

A bit to my surprise (given the many vocal critics and the lack of published data), we found evidence that the Net Promoter Score predicted growth in both the subsequent two- and four-year periods. On average, the Net Promoter Scores reported by Reichheld explained 38% of the changes in growth for the seven industries examined over the immediate two years (a low of 8% to a high of 76%). The explanatory power decreased somewhat as the future period lengthened (not too surprising given what can change in four years). For the four-year period, the average explanatory power of the NPS is still 30% (a low of 4% to a high of 79%).

To put these R2 values into perspective, the SAT explains (predicts) around 25% of first-year college grades, which makes these R2 values relatively large for an attitudinal measure.

Industry | Reichheld Historical R2 | 2-Year Future Growth R2 | 4-Year Future Growth R2
U.S. PC market | 68% | 27% | 75%
U.S. Insurance market | 86% | 39% | 4%
U.S. airlines market | 68% | 8% | 22%
U.S. Internet Service Providers | 93% | 20% | 2%
U.S. car rentals | 28% | 8% | 8%
UK supermarkets | 84% | 76% | 79%
Korean auto insurance | 68%* | 48% | 12%
Avg R2 (Fisher transformed) | 76% | 38% | 30%

Table 2: R2 values of seven industries from Reichheld’s NPS data compared to historically reported revenue and two-year and four-year growth rates by industry. The Fisher R to Z transformation was used to average the correlations before converting to R2 averages. *Reichheld reported an R2 of 68% for Korean auto but our replication from the scatterplots generated a value of ~30%. See other notes below by industry.

Below we have re-created the bubble scatterplots from Reichheld and compared them with our two-year future data. We estimated the regression lines, R2 values, and bubble sizes using a similar approach as described in Keiningham et al. (2007a).
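
For readers who want to re-create these plots, here is a minimal sketch (Python with NumPy and Matplotlib) of the general approach: a least-squares regression line over NPS versus growth, an R2 taken from the squared Pearson correlation, and bubble areas scaled by company size. The data points are placeholders, not the values from Reichheld’s appendix.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: NPS, two-year growth rate (%), and revenue ($B) per company.
nps     = np.array([10, 25, 40, 55, 70])
growth  = np.array([-2,  3,  6, 10, 14])
revenue = np.array([2.0, 5.0, 1.5, 8.0, 3.0])

slope, intercept = np.polyfit(nps, growth, 1)    # least-squares regression line
r_squared = np.corrcoef(nps, growth)[0, 1] ** 2  # simple linear R2

plt.scatter(nps, growth, s=revenue * 100, alpha=0.5)  # bubble area scales with revenue
plt.plot(nps, slope * nps + intercept, color="black")
plt.xlabel("Net Promoter Score")
plt.ylabel("Two-year growth rate (%)")
plt.title(f"R2 = {r_squared:.0%}")
plt.show()
```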

PC Shipments

Historical R2 = 74% Future (2 Years): R2 = 27%

Note: Compaq was purchased by HP so is not included in future years. IBM sold its PC business to Lenovo in 2005, so the calculation only includes growth rates between 2002–2004 instead of 2002–2005. Gateway merged with eMachines in 2004; growth rates are also only 2002–2004 and only include Gateway numbers.

US Life Insurance

Historical R2 = 86% Future (2 Years): R2 = 39%

Note: For Prudential we used growth rates in British pounds, but the bubble size on the chart is determined by life premiums converted to U.S. dollars.

US Airlines

Historical R2 = 66% Future (2 Years): R2 = 8%

Note: TWA stopped operations in 2001 and wasn’t included in the calculation for future years. America West Airlines’ growth for the four-year period is calculated over 2002–2004 because it merged with US Airways Group in 2005.

Internet Service Providers (ISPs)

Historical R2 = 89% Future (2 Years): R2 = 20%

 

UK Grocery Stores

Historical R2 = 81% Future (2 Years): R2 = 76%

Note: For ASDA we used growth rates in USD, but the bubble size on the chart is determined by sales converted to British pounds.

Korean Auto Insurance

Historical R2 = 68%/30%* Future (2 Years): R2 = 48%

Note: Reichheld reports an R2 of 68% but we calculated a much lower R2 of 30% from the same data.

U.S. Rental Cars

Historical R2 = 28% Future (2 Years): R2 = 17%

Note: In 2003 Vanguard Car Rental Group purchased the National and Alamo brands and didn’t report their revenue separately, so they are excluded from the future analysis.

 

Summary

A re-examination of the original NPS data using future (rather than historical) revenue growth found:

The NPS explains immediate firm growth in selected industries. On average we found NPS data can explain 38% of the variability in company growth metrics in seven industries at the company/firm level. This is less than half the explanatory power of historical growth reported by Reichheld (76%) but still represents a substantial amount relative to other behavioral science measures. While not as impressive, it still suggests the NPS is a leading indicator of future growth rates, at least in some selected industries for some time periods at the company level.

The NPS is still predictive of more distant growth. The explanatory power of the NPS still remained at a solid 30% for a four-year future growth period. This suggests that established company policies and growth patterns can remain in effect for years (but not always) and the NPS may still portend the more distant future (again in these selected industries and years).

Industry changes are hard to predict with few data points. Companies merge, industries morph, and unexpected changes can happen that affect a company’s growth and consequently the predictive ability of any measure, including the NPS. This was seen in the car rental industry (National merged), the PC industry (IBM sold its PC business to Lenovo), and the airline industry (TWA was acquired after bankruptcy). When an industry has few data points (e.g., ISPs with only three), only the strongest relationships are detectable, and small changes in one year can completely remove any evidence for a relationship between NPS and growth.

Prediction is imprecise. The NPS may be a victim of its own success, with its hype leading many to dismiss it unless it’s a perfect predictor of growth. (After all, the headline indicated it’s the ONE number you need to grow!) Making predictions is difficult and imprecise, but this analysis suggests the NPS does have reasonable predictive ability, at least as high as other high-stakes measures like college entrance exams. It’s unlikely to always be the superior measure in every industry, given our earlier analyses of satisfaction, but this data again suggests it may be an adequate proxy measure of future growth for many industries.

There is a possible selection bias. We limited our analysis to the industries, companies, and metrics reported by Reichheld. It’s likely that these are the best illustrations of the NPS’s predictive (or post-dictive) ability and may not be representative of all industries. Reichheld himself reported that the NPS wasn’t always the best predictor of growth (only in 11 of 14 industries). A future analysis will look at a broader range of industries beyond the seven shown here, as well as examinations at the customer level.

 

Sources

Below are the sources where we found growth rates to match those reported in Reichheld so you can check our work and assumptions (let us know if you see a discrepancy).

US PC market (All Firms)

US Life insurance market

US Airlines

US Internet Service Providers

UK supermarkets

Korean auto insurance

US Car rental




Nate-Silvering Small Data Leads to Internet Service Provider (ISP) industry insights

There is much talk of Big Data and how it is changing/impacting how businesses improve the customer experience. In this week’s post, I want to illustrate the value of Small Data.

Internet Service Providers (ISPs) receive the lowest customer satisfaction ratings among the industry sectors measured by the American Customer Satisfaction Index (ACSI). The ISP industry, then, has much room for improvement, some providers more than others. This week, I will use several data sets to help determine ISP intra-industry rankings and how to improve their inter-industry ranking.

Table 1. Internet Service Provider Ratings

I took to the Web to find several publicly available and relevant data sets regarding ISPs. In all, I found 12 metrics from seven different sources for 27 ISPs. I combined the data sets by ISP. By merging the different data sources, we will be able to uncover greater insights about these different ISPs and what they need to do to increase customer loyalty. The final data set appears in Table 1. The description of each metric appears below:

  • Broadband type: The types of broadband were taken from a PCMag article.
  • Actual ISP Speed: Average speed for Netflix streams from November 2012, measured in megabits per second (Mbps).
  • American Customer Satisfaction Index (ACSI): An overall measure of customer satisfaction from 2013. Ratings can vary from 0 to 100.
  • Temkin Loyalty Ratings: Based on three likelihood questions (repurchase, switch, and recommend) from 2012. Questions are combined and reported as a “net score,” similar to the NPS methodology. Net scores can range from -100 to 100.
  • JD Power: A 5-star rating system for overall satisfaction from 2012. 5 Star = Among the best; 4 Star = Better than most; 3 Star = About average; 2 Star = The rest.
  • PCMag Ratings (6 metrics: Recommend to Fees): Ratings based on a customer survey that measured different CX areas in 2012. Ratings are based on a 10-point scale.
  • DSL Reports: The average customer rating across five areas: 1) Pre-Sales Information, 2) Install Coordination, 3) Connection Reliability, 4) Tech Support, and 5) Value for Money. Data were pulled from the site on 6/30/2013. Ratings are based on a 5-point scale.

As you can see in Table 1, there is much missing data for some of the 27 ISPs. The missing data do not necessarily reflect the quality of the data that appear in the table; some sources did not collect enough data to provide reliable ratings for every ISP, or did not attempt to rate every ISP at all. The descriptive statistics for and correlations among the study variables appear in Table 2.
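
As a minimal sketch of how a merged-by-ISP table like this can be summarized despite the missing data, the Python/pandas snippet below merges a few placeholder sources and computes descriptive statistics and pairwise Pearson correlations (pandas drops missing values pairwise). The values are illustrative, not the actual numbers in Table 1.

```python
import pandas as pd

# Placeholder data illustrating the merge-by-ISP step (not the actual Table 1 values).
acsi   = pd.DataFrame({"ISP": ["WOW!", "Verizon FiOS", "Frontier", "HughesNet"],
                       "acsi": [71, 71, 61, None]})
speed  = pd.DataFrame({"ISP": ["WOW!", "Verizon FiOS", "Frontier"],
                       "speed_mbps": [2.4, 2.2, 1.5]})
temkin = pd.DataFrame({"ISP": ["WOW!", "Frontier", "HughesNet"],
                       "temkin": [38, 5, -10]})

# Merge the sources by ISP; ISPs missing from a source get NaN for that metric.
df = acsi.merge(speed, on="ISP", how="outer").merge(temkin, on="ISP", how="outer")

numeric = df.drop(columns="ISP")
print(numeric.describe())              # descriptive statistics per metric
print(numeric.corr(method="pearson"))  # pairwise correlations; NaNs dropped pairwise
```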

Table 2. Descriptive Statistics of and Correlations among Study Variables

It’s all about Speed

Customer experience management research tells us that one way of improving satisfaction is to improve the customer experience. We see that the actual speed of the ISP is positively related to most customer ratings, suggesting that ISPs with faster speeds also have customers who are more satisfied with them compared to ISPs with slower speeds. The only exception is satisfaction with Fees: ISPs with faster actual speeds tend to have customers who are less satisfied with Fees compared to ISPs with slower actual speeds.

Nate-Silvering the Data

Table 3. Rescaled Values of Customer Loyalty Metrics for Internet Service Providers

Recall that Nate Silver aggregated several polls to make accurate predictions about the results of the 2012 presidential elections. Even though different polls, due to sampling error, had different outcomes (sometimes Obama won, sometimes Romney won), the aggregation of different polls resulted in a clearer picture of who was really likely to win.

In the current study, we have five different survey vendors (ACSI, Temkin, JD Power, PCMag, and DSLREPORTS.com) assessing customer satisfaction with ISPs. Depending on which survey vendor you use, the rankings of ISPs differ. We can get a clearer picture of the ranking by combining the different data sources, because a single study is less reliable than the combination of many different studies. While the outcome of aggregating customer surveys may not be as interesting as aggregating presidential polls, the general approach that Silver used to aggregate different results can be applied to the current data (I call it Nate-Silvering the data).

Given that the average correlations among the loyalty-related metrics in Table 2 are rather high (average r = .77; median r = .87), aggregating each metric to form an Overall Advocacy Loyalty metric makes mathematical sense. This overall score would be a much more reliable indicator of the quality of an ISP than any single rating by itself.

To facilitate the aggregation process, I first transformed the customer ratings to a common 100-point scale using the following methods. I transformed the Temkin Ratings (a net score) into mean scores based on a mathematical model developed for this purpose (see: The Best Likelihood to Recommend Metric: Mean Score or Net Promoter Score?). This value was then multiplied by 10. The remaining metrics were transformed to a 100-point scale by multiplying by 20 (JD Power, DSLREPORTS) or by 10 (PCMag Sat, PCMag Rec). These rescaled values are located in Table 3. While the transformation altered the average of each metric, it did not appreciably alter the correlations among the metrics (average r = .75, median r = .82).
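
Here is a minimal sketch (Python with pandas) of the rescaling and aggregation just described. The ratings are placeholders rather than the actual Table 1 values, and the Temkin column is assumed to have already been converted from a net score to a 10-point mean score as described above.

```python
import pandas as pd

# Placeholder ratings on their native scales (not the actual Table 1 values).
df = pd.DataFrame({
    "ISP":         ["WOW!", "Verizon FiOS", "Frontier"],
    "acsi":        [71.0, 71.0, 61.0],  # already on a 0-100 scale
    "temkin_mean": [8.1, 7.6, 6.2],     # Temkin net score converted to a 10-point mean
    "jd_power":    [5.0, 4.0, 2.0],     # 5-star scale
    "dslreports":  [4.1, 3.6, 2.5],     # 5-point scale
    "pcmag_rec":   [8.6, 8.2, 6.4],     # 10-point scale
}).set_index("ISP")

rescaled = df.copy()
rescaled[["jd_power", "dslreports"]] *= 20    # 5-point scales -> 100-point
rescaled[["temkin_mean", "pcmag_rec"]] *= 10  # 10-point scales -> 100-point

# Average the rescaled metrics to form the Overall Advocacy Loyalty metric, then rank.
rescaled["overall_advocacy"] = rescaled.mean(axis=1)
print(rescaled.sort_values("overall_advocacy", ascending=False))
```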

Table 4. Rankings of Internet Service Providers based on the average loyalty ratings.

The transformed values were averaged for each of the ISPs. These results appear in Table 4. As seen in this table, the top 5 rated ISPs (overall advocacy ratings) are:

  1. WOW!
  2. Verizon FiOS
  3. Cablevision
  4. Earthlink
  5. Bright House

The bottom 5 rated ISPs (overall advocacy ratings) are:

  1. Windstream
  2. CenturyLink
  3. Frontier
  4. WildBlue
  5. HughesNet

Summary

Small Data, like its big brother, can provide good insight (with the help of the right analytics, of course) about a given topic. By combining small data sets about ISPs, I was able to show that:

  1. Actual ISP speed is related to customer satisfaction with the speed of the ISP. ISPs that have objectively faster speeds receive higher ratings on satisfaction with speed.
  2. Different survey vendors provide reliable and valid results about customer satisfaction with ISPs (there were high correlations among the different survey vendors).
  3. Improving customer loyalty for ISPs is, at least in part, a function of actual ISP speed.

The bottom line is that you shouldn’t forget the value of small data.


#Compliance and #Privacy in #Health #Informatics by @BesaBauta


In this podcast, @BesaBauta from MercyFirst talks about the compliance and privacy challenges faced in a hyper-regulated industry. With her experience in health informatics, Besa shared some best practices and challenges faced by data science groups in health informatics and other similar groups in the regulated space. This podcast is great for anyone looking to learn about data science compliance and privacy challenges.

Besa’s Recommended Read:
The Art Of War by Sun Tzu and Lionel Giles https://amzn.to/2Jx2PYm

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Besa’s BIO:
Dr. Besa Bauta is the Chief Data Officer and Chief Compliance Officer for MercyFirst, a social service organization providing health and mental health services to children and adolescents in New York City. She oversees the Research, Evaluation, Analytics, and Compliance for Health (REACH) division, including data governance and security measures, analytics, risk mitigation, and policy initiatives.
She is also an Adjunct Assistant Professor at NYU, and previously worked as a Research Director for a USAID project in Afghanistan, and as the Senior Director of Research and Evaluation at the Center for Evidence-Based Implementation and Research (CEBIR). She holds a Ph.D. in implementation science with a focus on health services, an MPH in Global Health and an MSW. Her research has focused on health systems, mental health, and integration of technology to improve population-level outcomes.

About #Podcast:
The #FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy


Big Data in China Is a Big Deal

Big data means different things in different regions – in China retailers are finding ways to make it useful.

One thing Western brands have learned from expansion into the East is that Chinese shoppers are a discerning consumer group. They want genuine quality (fake items are no longer acceptable), value (demonstrated by Singles’ Day’s record-shattering sales levels), and VIP treatment.

They also spend a lot of money, with around 250 million of them parting with approximately $275 billion each year for Internet purchases alone. That’s a massive 60 percent of all online purchases in Asia.

It’s a hugely significant retail market, and key to leveraging its potential is the intelligent use of the wealth of data gathered each time a shopper researches a product, visits a store, or makes a purchase.

“Big data” is often a loaded term – it can mean different things to different people, depending on the industry. But retailers have gone some way toward pinning it down and making it useful.

Targeting With Precision 

Perhaps the most important – and profitable – use of consumer data is extracting preferences and patterns of purchase and using the information to offer highly targeted value-added services and products.

For example, Alibaba’s Open Data Processing Service (ODPS) has allowed it to analyze millions of transactions and set up a highly effective loan service for small online businesses. Data from Alipay and Alibaba’s shopping sites, including purchases, reviews, and credit ratings, is used to assess a borrower’s ability to repay a loan.

The use of more than 100 computing models and around 80 billion data entries has allowed Alibaba to reduce the cost of lending to a fraction of the cost of a traditional bank loan.

Of course, this kind of accuracy in consumer targeting opens the door to clienteling and the personalized service customers in China are looking for – the ability to identify with precision the needs and wants of people looking for a superior service.

For a small fee, retailers can use the might of ODPS’ processing power to identify trends, pinpoint key demographics, and plan future ranges and campaigns aimed at meeting the exact requirements of their customers.

Tracking Rogue Traders

Concern over counterfeit products means that consumers are prepared to pay a premium for genuine Western items, and will choose online stores such as TMall, which have a reputation for trading in authentic brands. However, the proliferation of fake goods is still a problem, and businesses are turning to big data to help tackle the issue. Following a report by the Chinese government, an e-commerce union comprising key online firms has been established, designed to pool vendor data in an attempt to identify rogue traders through their online shops, transactions, and other sales activity.

The amount of detailed information available to sales platforms should, in theory, mean that there is simply nowhere left to hide for sellers with less than scrupulous product standards.

Transfer of Intelligence

According to the Chinese University of Hong Kong, the three biggest players in China’s online industry, known as “BAT” (Baidu, Alibaba, Tencent) are “sitting on a goldmine of big data.”

The potential for cross-industry application is huge – data integration and mining between retail and financial institutions, for example, will drive the future of both online and physical commerce.

Tapping into the skills and experience of BAT – Baidu alone has thousands of analysts assessing data every day – will give retailers and other industries unprecedented accuracy in profiling the people buying their products and using their services, rich in both detail and opportunity.

The bottom line is that, especially for retailers, big data is a big deal. Expect to see even more sophisticated targeting models and customer-centric business operations coming from China over the next year, thanks to the intelligent use of information.


Unraveling the Mystery of Big Data

Synopsis:
Curious about the Big Data hype? Want to find out just how big BIG is? Who’s using Big Data for what, and what can you use it for? How about the architecture underpinnings and technology stacks? Where might you fit in the stack? Maybe some gotchas to avoid? Lionel Silberman, a seasoned Data Architect, sheds some light on it. A good and wholesome refresher on Big Data and all it can do.
Our guest speaker:

Lionel Silberman,
Senior Data Architect, Compuware
Lionel Silberman has over thirty years of experience in big data product development. He has expert knowledge of relational databases, both internals and applications, performance tuning, modeling, and programming. His product and development experience encompasses the major RDBMS vendors, object-oriented, time-series, OLAP, transaction-driven, MPP, distributed and federated database applications, data appliances, NoSQL systems Hadoop and Cassandra, as well as data parallel and mathematical algorithm development. He is currently employed at Compuware, integrating enterprise products at the data level. All are welcome to join us.



Voices in AI – Episode 80: A Conversation with Charlie Burgoyne


About this Episode

Episode 80 of Voices in AI features host Byron Reese and Charlie Burgoyne discussing the difficulty of defining AI and how computer intelligence and human intelligence intersect and differ.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought you by GigaOm and I’m Byron Reese. Today my guest is Charlie Burgoyne. He is the founder and CEO of Valkyrie Intelligence, a consulting firm with domain expertise in applied science and strategy. He’s also a general partner for Valkyrie Signals, an AI-driven hedge fund based in Austin, as well as the managing partner for Valkyrie labs, an AI credit company. Charlie holds a master’s degree in theoretical physics from Georgetown University and a bachelor’s in nuclear physics from George Washington University.

I had the occasion to meet Charlie when we shared a stage when we were talking about AI and about 30 seconds into my conversation with him I said we gotta get this guy on the show. And so I think ‘strap in’ it should be a fun episode. Welcome to the show Charlie.

Charlie Burgoyne: Thanks so much Byron for having me, excited to talk to you today.

Let’s start with [this]: maybe re-enact a little bit of our conversation when we first met. Tell me how you think of artificial intelligence, like what is it? What is artificial about it and what is intelligent about it?

Sure, so the further I get down in this field, I start thinking about AI with two different definitions. It’s a servant with two masters. It has its private sector, applied narrowband applications where AI is really all about understanding patterns that we perform and that we capitalize on every day and automating those — things like approving time cards and making selections within a retail environment. And that’s really where the real value of AI is right now in the market and [there’s] a lot of people in that space who are developing really cool algorithms that capitalize on the potential patterns that exist and largely lay dormant in data. In that definition, intelligence is really about the cycles that we use within a cognitive capability to instrument our life and it’s artificial in that we don’t need an organic brain to do it.

Now the AI that I’m obsessed with from a research standpoint (a lot of academics are and I know you are as well Byron) — that AI definition is actually much more around the nature of intelligence itself, because in order to artificially create something, we must first understand it in its primitive state and its in its unadulterated state. And I think that’s where the bulk of the really fascinating research in this domain is going, is just understanding what intelligence is, in and of itself.

Now I’ll come kind of straight to the interesting part of this conversation, which is I’ve had not quite a hundred guests on the show. I can count on one hand the number who think it may not be possible to build a general intelligence. According to our conversation, you are not convinced that we can do it. Is that true? And if so why?

Yes… The short answer is I am not convinced we can create a generalized intelligence, and that’s become more and more solidified the deeper and deeper I go into research and familiarity with the field. If you really unpack intelligent decision making, it’s actually much more complicated than a simple collection of gates, a simple collection of empirically driven singular decisions, right? A lot of the neural network scientists would have us believe that all decisions are really the right permutation of weighted neurons interacting with other layers of weighted neurons.

From what I’ve been able to tell so far with our research, either that is not getting us towards the goal of creating a truly intelligent entity or it’s doing the best within the confines of the mechanics we have at our disposal now. In other words, I’m not sure whether or not the lack of progress towards a true generalized intelligence is due to the fact that (a) the digital environment that we have tried to create said artificial intelligence in is unamenable to that objective or (b) the nuances that are inherent to intelligence… I’m not positive yet those are things through which we have an understanding of modeling, nor would we ever be able to create a way of modeling that.

I’ll give you a quick example: If we think of any science fiction movie that encapsulates the nature of what AI will eventually be, whether it’s Her, or Ex Machina or Skynet or you name it. There are a couple of big leaps that get glossed over in all science fiction literature and film, and those leaps are really around things like motivation. What motivates an AI, like what truly at its core motivates AI like the one in Ex Machina to leave her creator and to enter into the world and explore? How is that intelligence derived from innate creativity? How are they designing things? How are they thinking about drawings and how are they identifying clothing that they need to put on? All these different nuances that are intelligently derived from that behavior. We really don’t have a good understanding of that, and we’re not really making progress towards an understanding of that, because we’ve been distracted for the last 20 years with research in fields of computer science that aren’t really that closely related to understanding those core drivers.

So when you say a sentence like ‘I don’t know if we’ll ever be able to make a general intelligence,’ ever is a long time. So do you mean that literally? Tell me a scenario in which it is literally impossible — like it can’t be done, even if you came across a genie that could grant your wish. It just can’t be done. Like maybe time travel, you know — back in time, it just may not be possible. Do you mean that ‘may not’ be possible? Or do you just mean on a time horizon that is meaningful to humans?

I think it’s on the spectrum between the two. But I think it leans closer towards ‘not ever possible under any condition.’ I was at a conference recently and I made this claim which admittedly as any claim with this particular question would be based off of intuition and experience which are totally fungible assets. But I made this claim that I didn’t think it was ever possible, and something the audience asked me, well, have you considered meditating to create a synthetic AI? And the audience laughed and I stopped and I said: “You know that’s actually not the worst idea I’ve been exposed to.” That’s not the worst potential solution for understanding intelligence to try and reverse engineer my own brain with as little distractions from its normal working mechanics as possible. That may very easily be a credible aid to understanding how the brain works.

If we think about gravity, gravity is not a bad analog. Gravity is this force that everybody and their mother who’s older than, you know who’s past fifth grade understands how it works, you drop an apple you know which direction it’s going to go. Not only that but as you get experienced you can have a prediction of how fast it will fall, right? If you were to see a simulation drop an apple and it takes twelve seconds to hit the ground, you’d know that that was wrong, even if the rest of the vector was correct, the scaler is off a little bit. Right?

The reality is that we can’t create an artificial gravity environment, right? We can create forces that simulate gravity. Centrifugal force is not a bad way of replicating gravity but we don’t actually know enough about the underlying mechanics that guide gravity such that we could create an artificial gravity using the same techniques, relatively the same mechanics that are used in organic gravity. In fact it was only a year and a half ago or so closer to two years now where the Nobel Prize for Physics was awarded to the individuals who identified that it was gravitational waves that permeate gravity (actually that’s how they do gravitons), putting to rest an argument that’s been going on since Einstein truly.

So I guess my point is that we haven’t really made progress in understanding the underlying mechanics, and every step we’ve taken has proven to be extremely valuable in the industrial sector but actually opened up more and more unknowns in the actual inner workings of intelligence. If I had to bet today, not only is the time horizon on a true artificial intelligence extremely long-tailed but I actually think that it’s not impossible that it’s completely impossible altogether.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
