Important Strategies to Enhance Big Data Access

When you face seemingly unmanageable, insurmountable Big Data sets, getting easy access to the right data can feel practically impossible. Big Data management is genuinely tricky and raises issues of its own, yet effective data access is still attainable. Here are a few strategies for achieving superior data connectivity.

Understanding Hadoop

Hadoop is an ecosystem designed to help organizations store mammoth quantities of Big Data. As companies grapple with integrating Big Data into their existing data infrastructure, a sound understanding of how to bring data into Hadoop and take it back out is essential for moving ahead.
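
To make this concrete, here is a minimal sketch of moving a file into and out of HDFS from Python. It assumes the third-party hdfs (WebHDFS) client package; the NameNode URL, user, and paths are placeholders for your cluster's values.

from hdfs import InsecureClient

# Connect to the cluster's WebHDFS endpoint (placeholder URL and user)
client = InsecureClient("http://namenode.example.com:9870", user="analyst")

# Ingest: copy a local extract into HDFS for downstream processing
client.upload("/data/raw/orders.csv", "orders_local.csv", overwrite=True)

# Egress: pull a processed result back out of HDFS
client.download("/data/processed/summary.csv", "summary_local.csv", overwrite=True)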

Integrating Cloud Data with Already Existing Reporting Applications

Integrating cloud data with existing reporting applications such as Salesforce DX has transformed the way you perceive and work with customer data. These systems can, however, face complications in producing real-time reports. Because you rely on those reports for sound business decisions, there is real demand for a solution that enables this kind of real-time reporting.

Do Not Let the Sheer Scale of Big Data Get to You

Big Data can be hugely advantageous for businesses, but if your organization is not ready to handle it effectively, you may have to do without the business value Big Data has on offer. Only some organizations have the scalable, flexible data infrastructure required to exploit Big Data for crucial business insight.

Access Salesforce Data via SQL

Salesforce data provides great value for numerous organizations; however, access issues can be a major obstacle to reaping its full advantage. Businesses can now get easy access to Salesforce data through ODBC and SQL: these drivers let you create a connection and start running queries in just a few minutes.
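
As an illustration, the sketch below queries Salesforce through an ODBC driver with pyodbc. "SalesforceDSN" is a hypothetical data source name configured for whichever Salesforce ODBC driver you license, and the object and field names are illustrative.

import pyodbc

# Connect through a pre-configured ODBC data source (hypothetical DSN)
conn = pyodbc.connect("DSN=SalesforceDSN", autocommit=True)
cursor = conn.cursor()

# Ordinary SQL, translated by the driver into Salesforce API calls
cursor.execute("SELECT Name, AnnualRevenue FROM Account WHERE AnnualRevenue > 1000000")
for name, revenue in cursor.fetchall():
    print(name, revenue)

conn.close()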

Do Accurate Analysis of Big Data

The accuracy of Big Data analysis depends on the technology you use. A company can choose among several Big Data platforms, such as Apache Spark and Hadoop, that deliver distinct, accurate analyses of Big Data sets; more advanced Big Data technology generates more sophisticated Big Data models. Many organizations opt for a reliable Big Data provider, and with the variety of options available today, businesses can readily find one that suits their specific requirements and produces accurate results.
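
For example, a minimal PySpark sketch of a batch aggregation over a large data set might look like the following; the input path and column names are assumptions.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("big-data-analysis").getOrCreate()

# Read a large file from distributed storage (placeholder path and schema)
events = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)

# Aggregate at scale: average value and event count per region
summary = (events.groupBy("region")
                 .agg(F.avg("value").alias("avg_value"),
                      F.count("*").alias("n_events")))
summary.show()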

Conclusion

Business organizations must take extra initiative in assessing and analyzing the data they collect. They must make sure the data comes from an authentic, reliable source and identify the context behind its generation. Every step of the analysis process, from data ingestion through enrichment and preparation, needs careful observation, and the data must be protected from external interference.

Author Bio:

Sujain Thomas is a Salesforce consultant who discusses the benefits of Salesforce DX in her blog posts. She is an avid blogger with an impressive fan base.

 

Source: Important Strategies to Enhance Big Data Access

Nov 23, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Ethics

[ AnalyticsWeek BYTES]

>> Apr 27, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> How AI is hacking humanity! Lesson from #Brexit & #Election2016 by v1shal

>> Startup Movement Vs Momentum, a Classic Dilemma by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Ditching Engagement in Favor of Blunt-Force Awareness Is a Temptation Marketers Must Avoid – Adweek (Under: Social Analytics)

>> Big Data Set to Get Much Bigger by 2021 – Which-50 (blog) (Under: Big Data)

>> Weak cyber-security protocols can rob companies off clients say experts – Exchange4Media (Under: cyber security)

More NEWS? Click Here

[ FEATURED COURSE]

Hadoop Starter Kit


Hadoop learning made easy and fun. Learn HDFS, MapReduce and introduction to Pig and Hive with FREE cluster access…. more

[ FEATURED READ]

On Intelligence


Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one strok… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from the zombie apocalypse of unscalable models
One living, breathing zombie in today’s analytical models is the pulsating absence of error bars. Not every model is scalable or holds its ground as data grows. Error bars should be tagged to almost every model and duly calibrated; as business models rake in more data, the error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failure, leading us to a Halloween we never want to see.

[ DATA SCIENCE Q&A]

Q:Is it better to design robust or accurate algorithms?
A: A. The ultimate goal is to design systems with good generalization capacity, that is, systems that correctly identify patterns in data instances not seen before
B. The generalization performance of a learning system strongly depends on the complexity of the model assumed
C. If the model is too simple, the system can only capture the actual data regularities in a rough manner. In this case, the system has poor generalization properties and is said to suffer from underfitting
D. By contrast, when the model is too complex, the system can identify accidental patterns in the training data that need not be present in the test set. These spurious patterns can be the result of random fluctuations or of measurement errors during the data collection process. In this case, the generalization capacity of the learning system is also poor. The learning system is said to be affected by overfitting
E. Spurious patterns, which are only present by accident in the data, tend to have complex forms. This is the idea behind the principle of Occam’s razor for avoiding overfitting: simpler models are preferred if more complex models do not significantly improve the quality of the description for the observations
Quick response: Occam’s Razor. It depends on the learning task. Choose the right balance
F. Ensemble learning can help balance bias and variance (several weak learners together = a strong learner); see the sketch below
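
A small illustrative sketch (not part of the original answer) of this bias/variance trade-off, using polynomial degree as the complexity knob on synthetic data:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):  # underfit, reasonable, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(degree,
          mean_squared_error(y_tr, model.predict(X_tr)),   # train error
          mean_squared_error(y_te, model.predict(X_te)))   # test error
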
Source

[ VIDEO OF THE WEEK]

RShiny Tutorial: Turning Big Data into Business Applications

Subscribe to YouTube

[ QUOTE OF THE WEEK]

The data fabric is the next middleware. – Todd Papaioannou

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @DavidRose, @DittoLabs

Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

According to Twitter’s own research in early 2012, it sees roughly 175 million tweets every day, and has more than 465 million accounts.

Sourced from: Analytics.CLUB #WEB Newsletter

Please share your thoughts about Steve Jobs

Everybody has an opinion about Steve Jobs. Please tell me what you think of him and how he has impacted your life in this brief survey.

Steve Jobs, co-founder of Apple, passed away earlier this week at the age of 56. In the process of writing about how he impacted my life in my blog, I created an image of him. To make this image, I collected quotes and articles that were written about him in the day following his passing. The quotes were from such notables as President Obama, Mark Zuckerberg, Guy Kawasaki, and Bill Gates, to name a few. Using these descriptive words of Steve Jobs, I created a word cloud in the form of his soon-to-be iconic image on Apple.com.

In the word cloud, the font size of each word reflects how frequently it was used; the larger the font, the more frequently that word was used to describe Steve Jobs. The picture essentially represents how these people define him and remember him.

I now want to be more purposeful in creating the same image using words from people who never met him but whose lives may have been impacted by him. Could you please complete my one-minute survey about Steve Jobs? I am also going to conduct sentiment analysis on your comments to understand the sentiment behind them. So… your survey responses help to create art and advance science. Besides feeling good about yourself, you will be notified when this project is completed (if you provide your email address in the survey).

The more people who complete the survey, the more interesting the image(s) become(s) (e.g., look at age differences in sentiment). Please consider sharing the page using your social media savvy.
Thanks,

Bob E. Hayes, Ph.D.

Source: Please share your thoughts about Steve Jobs by bobehayes

Nov 16, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Insights

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> IT jobs to shift to new tech, data analytics, cloud services by analyticsweekpick

>> The Increasing Influence of Cloud Computing by jelaniharper

>> Factoid to Give Big-Data a Perspective by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Microsoft Azure customers now can run workloads on Cray supercomputers – ZDNet (Under: Data Scientist)

>> ‘Cyber security a major challenge for govt organisations’ – Hindu Business Line (Under: cyber security)

>> Master of machines: the rise of artificial intelligence calls for postgrad experts – The Guardian (Under: Artificial Intelligence)

More NEWS? Click Here

[ FEATURED COURSE]

Applied Data Science: An Introduction


As the world’s data grow exponentially, organizations across all sectors, including government and not-for-profit, need to understand, manage and use big, complex data sets—known as big data…. more

[ FEATURED READ]

Rise of the Robots: Technology and the Threat of a Jobless Future


What are the jobs of the future? How many will there be? And who will have them? As technology continues to accelerate and machines begin taking care of themselves, fewer people will be necessary. Artificial intelligence… more

[ TIPS & TRICKS OF THE WEEK]

Keeping Biases Checked during the last mile of decision making
Today a data-driven leader, data scientist, or data-driven expert is constantly put to the test by helping the team solve a problem using their skills and expertise. Believe it or not, part of that decision tree is derived from intuition, which adds a bias to our judgment and can taint the suggestions. Most skilled professionals understand and handle these biases well, but in a few cases we give in to tiny traps and find ourselves caught in biases that impair judgment. So it is important to keep intuition bias in check when working on a data problem.

[ DATA SCIENCE Q&A]

Q:You have data on the durations of calls to a call center. Generate a plan for how you would code and analyze these data. Explain a plausible scenario for what the distribution of these durations might look like. How could you test, even graphically, whether your expectations are borne out?
A: 1. Exploratory data analysis
* Histogram of durations
* Histogram of durations per service type, per day of week, per hour of day (durations can be systematically longer from 10am to 1pm, for instance), per employee…
2. Distribution: lognormal?

3. Test graphically with a QQ plot: sample quantiles of log(durations) vs. normal quantiles (see the sketch below)
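
A short sketch of that graphical check, with synthetic lognormal data standing in for the call-center file:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

durations = np.random.lognormal(mean=4.0, sigma=0.8, size=5000)  # placeholder data

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(durations, bins=50)
ax1.set_xlabel("call duration (seconds)")

# If durations are lognormal, log(durations) should track the QQ line
stats.probplot(np.log(durations), dist="norm", plot=ax2)
plt.show()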

Source

[ VIDEO OF THE WEEK]

@BrianHaugli @The_Hanover on Building a #Leadership #Security #Mindset #FutureOfData #Podcast

Subscribe to YouTube

[ QUOTE OF THE WEEK]

In God we trust. All others must bring data. – W. Edwards Deming

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Dr. Nipa Basu, @DnBUS

Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

Retailers who leverage the full power of big data could increase their operating margins by as much as 60%.

Sourced from: Analytics.CLUB #WEB Newsletter

Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures

Patient experience (PX) has become an important topic for US hospitals. The Centers for Medicare & Medicaid Services (CMS) will be using patient feedback about their care as part of their reimbursement plan for acute care hospitals (see Hospital Value-Based Purchasing Program). Not surprisingly, hospitals are focusing on improving the patient experience to ensure they receive the maximum of their incentive payments. Additionally, US hospitals track other types of metrics (e.g., process of care and mortality rates) as measures of quality of care.

Given that hospitals have a variety of metrics at their disposal, it would be interesting to understand how these different metrics are related with each other. Do hospitals that receive higher PX ratings (e.g., more satisfied patients) also have better scores on other metrics (lower mortality rates, better process of care measures) than hospitals with lower PX ratings? In this week’s post, I will use the following hospital quality metrics:

  1. Patient Experience
  2. Health Outcomes (mortality rates, re-admission rates)
  3. Process of Care

I will briefly cover each of these metrics below.

Table 1. Descriptive Statistics for PX, Health Outcomes and Process of Care Metrics for US Hospitals (acute care hospitals only)

1. Patient Experience

Patient experience (PX) reflects patients’ perceptions of their recent inpatient experience. PX is collected through a survey known as HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems). HCAHPS (pronounced “H-caps“) is a national, standardized survey of hospital patients, developed by a partnership of public and private organizations to publicly report the patient’s perspective of hospital care.

The survey asks a random sample of recently discharged patients about important aspects of their hospital experience. The data set includes patient survey results for over 3800 US hospitals on ten measures of patients’ perspectives of care (e.g., nurse communication, pain well controlled). I combined two general questions (overall hospital rating and likelihood to recommend) to create a patient advocacy metric. Thus, a total of 9 PX metrics were used. Across all 9 metrics, hospital scores can range from 0 (bad) to 100 (good). You can see the PX measures for different US hospitals here.

2. Process of Care

Process of care measures show, in percentage form or as a rate, how often a health care provider gives recommended care; that is, the treatment known to give the best results for most patients with a particular condition. The process of care metric is based on medical information from patient records that reflects the rate or percentage across 12 procedures related to surgical care.  Some of these procedures are related to antibiotics being given/stopped at the right times and treatments to prevent blood clots.  These percentages were translated into scores that ranged from 0 (worse) to 100 (best).  Higher scores indicate that the hospital has a higher rate of following best practices in surgical care. Details of how these metrics were calculated appear below the map.

I calculated an overall Process of Care metric by averaging the 12 process of care scores. This metric was used because it has good measurement properties (internal consistency was .75) and thus reflects a good overall measure of process of care. You can see the process of care measures for different US hospitals here.

3. Health Outcomes

Measures that tell what happened after patients with certain conditions received hospital care are called “Outcome Measures.” We use two general types of outcome measures: 1) 30-day Mortality Rate and 2) 30-day Readmission Rate. The 30-day risk-standardized mortality and 30-day risk-standardized readmission measures for heart attack, heart failure, and pneumonia are produced from Medicare claims and enrollment data using sophisticated statistical modeling techniques that adjust for patient-level risk factors and account for the clustering of patients within hospitals.

The death rates focus on whether patients died within 30 days of their hospitalization. The readmission rates focus on whether patients were hospitalized again within 30 days.

Three mortality rate and readmission rate measures were included in the healthcare dataset for each hospital. These were:

  1. 30-Day Mortality Rate / Readmission Rate from Heart Attack
  2. 30-Day Mortality Rate / Readmission Rate from Heart Failure
  3. 30-Day Mortality Rate / Readmission Rate from Pneumonia

Mortality/readmission rates are measured per 1000 patients. So, if a hospital has a heart attack mortality rate of 15, that means that for every 1000 heart attack patients, 15 of them die (or, for the readmission measure, are readmitted). You can see the health outcome measures for different US hospitals here.

Table 2. Correlations of PX metrics with Health Outcome and Process of Care Metrics for US Hospitals (acute care hospitals only).

Results

The three types of metrics (PX, Health Outcomes, Process of Care) were housed in separate databases on the data.medicare.gov site. As explained elsewhere in my post on Big Data, I linked these three data sets together by hospital name. Basically, I federated the necessary metrics from their respective databases and combined them into a single data set.
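
A minimal pandas sketch of that federation step, joining three extracts on hospital name; the file and column names are assumptions, not the actual data.medicare.gov schemas.

import pandas as pd

px = pd.read_csv("hcahps_patient_experience.csv")   # PX survey scores
outcomes = pd.read_csv("health_outcomes.csv")       # mortality/readmission rates
process = pd.read_csv("process_of_care.csv")        # surgical care measures

combined = (px.merge(outcomes, on="hospital_name", how="inner")
              .merge(process, on="hospital_name", how="inner"))

# Correlations of every pair of numeric metrics (the basis for Table 2)
print(combined.corr(numeric_only=True))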

Descriptive statistics for each variable are located in Table 1. The correlations of each of the PX measures with each of the Health Outcome and Process of Care measures are located in Table 2. As you can see, the correlations of PX with the other hospital metrics are very low, suggesting that PX measures assess something quite different from the Health Outcome and Process of Care measures.

Patient Loyalty and Health Outcomes and Process of Care

Patient loyalty/advocacy (as measured by the Patient Advocacy Index) is logically correlated with the other measures (except for Death Rate from Heart Failure). Hospitals that have higher patient loyalty ratings have lower death rates, lower readmission rates, and higher levels of process of care. The degree of relationship, however, is quite small (the percent of variance explained by patient advocacy is only 3%).

Patient Experience and Health Outcomes and Process of Care

Patient experience (PX) shows a complex relationship with health outcome and process of care measures. It appears that hospitals that have higher PX ratings also report higher death rates. However, as expected, hospitals that have higher PX ratings report lower readmission rates. Although statistically significant, all of the correlations of PX metrics with other hospitals metrics are low.

The PX dimension that had the highest correlation with readmission rates and process of care measures was “Given Information about my Recovery upon discharge.” Hospitals that received high scores on this dimension also experienced lower readmission rates and higher process of care scores.

Summary

Hospitals track different types of quality metrics to evaluate their performance. Three types of metrics for US hospitals were examined to understand how well they relate to each other (there are many other metrics on which hospitals can be compared). Results show that patient experience and patient loyalty are only weakly related to other hospital metrics, suggesting that improving the patient experience will have little impact on other hospital measures (health outcomes, process of care).

 

Originally Posted at: Evaluating Hospital Quality using Patient Experience, Health Outcomes and Process of Care Measures by bobehayes

Why Using the ‘Cloud’ Can Undermine Data Protections

By Jack Nicas

While the increasing use of encryption helps smartphone users protect their data, another sometime related technology, cloud computing, can undermine those protections.

The reason: encryption can keep certain smartphone data outside the reach of law enforcement. But once the data is uploaded to companies’ computers connected to the Internet–referred to as “the cloud”–it may be available to authorities with court orders.

“The safest place to keep your data is on a device that you have next to you,” said Marc Rotenberg, head of the Electronic Privacy Information Center. “You take a bit of a risk when you back up your device. Once you do that it’s on another server.”

Encryption and cloud computing “are two competing trends,” Mr. Rotenberg said. “The movement to the cloud has created new privacy risks for users and businesses. Encryption does offer the possibility of restoring those safeguards, but it has to be very strong and it has to be under the control of the user.”

Apple is fighting a government request that it help the Federal Bureau of Investigation unlock the iPhone of Syed Rizwan Farook, the shooter in the December terrorist attack in San Bernardino, Calif.

The FBI believes the phone could contain photos, videos and records of text messages that Mr. Farook generated in the final weeks of his life.

The data produced before then? Apple already provided it to investigators, under a court search warrant. Mr. Farook last backed up his phone to Apple’s cloud service, iCloud, on Oct. 19.

Encryption scrambles data to make it unreadable until accessed with the help of a unique key. The most recent iPhones and Android phones come encrypted by default, with a user’s passcode activating the unique encryption key stored on the device itself. That means a user’s contacts, photos, videos, calendars, notes and, in some cases, text messages are protected from anyone who doesn’t have the phone’s passcode. The list includes hackers, law enforcement and even the companies that make the phones’ software: Apple and Google.

However, Apple and Google software prompt users to back up their devices on the cloud. Doing so puts that data on the companies’ servers, where it is more accessible to law enforcement with court orders.

Apple says it encrypts data stored on its servers, though it holds the encryption key. The exception is so-called iCloud Keychain data that stores users’ passwords and credit-card information; Apple says it can’t access or read that data.

Officials appear to be asking for user data more often. Google said that it received nearly 35,000 government requests for data in 2014 and that it complies with the requests in about 65% of cases. Apple’s data doesn’t allow for a similar comparison since the company reported the number of requests from U.S. authorities in ranges in 2013.

Whether or not they back up their smartphones to the cloud, most users generate an enormous amount of data that is stored outside their devices, and thus more accessible to law enforcement.

“Your phone is an incredibly intricate surveillance device. It knows everyone you talk to, where you are, where you live and where you work,” said Bruce Schneier, chief technology officer at cybersecurity firm Resilient Systems Inc. “If you were required to carry one by law, you would rebel.”

Google, Yahoo Inc. and others store users’ emails on their servers. Telecom companies keep records of calls and some standard text messages. Facebook Inc. and Twitter Inc. store users’ posts, tweets and connections.

Even Snapchat Inc., the messaging service known for photo and video messages that quickly disappear, stores some messages. The company says in its privacy policy that “in many cases” it automatically deletes messages after they are viewed or expire. But it also says that “we may also retain certain information in backup for a limited period or as required by law” and that law enforcement sometimes requires it “to suspend our ordinary server-deletion practices for specific information.”

Snapchat didn’t respond to a request for comment.

Write to Jack Nicas at jack.nicas@wsj.com
Copyright (c) 2016 Dow Jones & Company, Inc.

Source: Why Using the ‘Cloud’ Can Undermine Data Protections by analyticsweekpick

Nov 09, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Accuracy check

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Surviving the Internet of Things by v1shal

>> Map of US Hospitals and their Health Outcome Metrics by bobehayes

>> Eradicating Silos Forever with Linked Enterprise Data by jelaniharper

Wanna write? Click Here

[ NEWS BYTES]

>> The Importance of TSP Snapshot Statistics – FEDweek (Under: Statistics)

>> World’s largest data center to be built in Arctic Circle – CNBC (Under: Data Center)

>> Hybrid cloud and blockchain solutions will be the future for data … – Information Age (Under: Hybrid Cloud)

More NEWS? Click Here

[ FEATURED COURSE]

A Course in Machine Learning


Machine learning is the study of algorithms that learn from data and experience. It is applied in a vast variety of application areas, from medicine to advertising, from military to pedestrian. Any area in which you need… more

[ FEATURED READ]

Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking


Written by renowned data science experts Foster Provost and Tom Fawcett, Data Science for Business introduces the fundamental principles of data science, and walks you through the “data-analytic thinking” necessary for e… more

[ TIPS & TRICKS OF THE WEEK]

Finding success in data science? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. Because most data science professionals work in isolation, getting an unbiased perspective is not easy. It is also often hard to see how a data science career will progress. A network of mentors addresses these issues easily, giving data professionals an outside perspective and an unbiased ally. It is extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q:What is statistical power?
A: * Sensitivity of a binary hypothesis test
* Probability that the test correctly rejects the null hypothesis H0 when the alternative H1 is true
* Ability of a test to detect an effect, if the effect actually exists
* Power = P(reject H0 | H1 is true)
* As power increases, the chance of a Type II error (false negative) decreases
* Used in the design of experiments to calculate the minimum sample size required so that one can reasonably detect an effect, e.g., “how many times do I need to flip a coin to conclude it is biased?” (a worked example follows)
* Used to compare tests, for example a parametric and a non-parametric test of the same hypothesis
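
As a worked example of the coin-flip question, a sketch using the normal approximation for a one-sided test of p0 = 0.5 against p1 = 0.6 at alpha = .05 and power = .80:

from math import sqrt, ceil
from scipy.stats import norm

p0, p1, alpha, power = 0.5, 0.6, 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)

# Standard sample-size formula for a one-sided test of a proportion
n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
print(ceil(n))  # about 153 flips to detect this bias reliably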

Source

[ VIDEO OF THE WEEK]

Data-As-A-Service (#DAAS) to enable compliance reporting

Subscribe to YouTube

[ QUOTE OF THE WEEK]

You can use all the quantitative data you can get, but you still have to distrust it and use your own intelligence and judgment. – Alvin Toffler

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MPFlowersNYC, @enigma_data

Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

571 new websites are created every minute of the day.

Sourced from: Analytics.CLUB #WEB Newsletter

Surge in real-time big data and IoT analytics is changing corporate thinking

Big data that can be immediately actionable in business decisions is transforming corporate thinking. One expert cautions that a mindset change is needed to get the most from these analytics.

Gartner reported in September 2014 that 73% of respondents in a third quarter 2014 survey had already invested or planned to invest in big data in the next 24 months. This was an increase from 64% in 2013.

The big data surge has fueled the adoption of Hadoop and other big data batch processing engines, but it is also moving beyond batch and into a real-time big data analytics approach.

Organizations want real-time big data and analytics capability because of an emerging need for big data that can be immediately actionable in business decisions. An example is the use of big data in online advertising, which immediately personalizes ads for viewers when they visit websites based on their customer profiles that big data analytics have captured.

“Customers now expect personalization when they visit websites,” said Jeff Kelley, a big data analytics analyst from Wikibon, a big data research and analytics company. “There are also other real-time big data needs in specific industry verticals that want real-time analytics capabilities.”

The financial services industry is a prime example. “Financial institutions want to cut down on fraud, and they also want to provide excellent service to their customers,” said Kelley. “Several years ago, if a customer tried to use his debit card in another country, he was often denied because of fears of fraud in the system processing the transaction. Now these systems better understand each customer’s habits and the places that he is likely to travel to, so they do a better job at preventing fraud, but also at enabling customers to use their debit cards without these cards being locked down for use when they travel abroad.”

Kelley believes that in the longer term this ability to apply real-time analytics to business problems will grow as the Internet of Things (IoT) becomes a bigger factor in daily life.

“The Internet of Things will enable sensor tracking of consumer-type products in businesses and homes,” he said. “You will be able to collect and analyze data from various pieces of equipment and appliances and optimize performance.”

The process of harnessing IoT data is highly complex, and companies like GE are now investigating the possibilities. If this IoT data can be captured in real time and acted upon, preventive maintenance analytics can be developed to preempt performance problems on equipment and appliances, and it might also be possible for companies to deliver more rigorous sets of service level agreements (SLAs) to their customers.
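
To make that idea concrete, here is a hedged sketch of one simple preventive-maintenance signal: flag sensor readings that drift beyond three rolling standard deviations. The file, column names and threshold are illustrative assumptions.

import pandas as pd

# Time-stamped sensor readings (placeholder file and columns)
readings = pd.read_csv("turbine_temps.csv", parse_dates=["ts"]).set_index("ts")

roll = readings["temp_c"].rolling("1h")
zscore = (readings["temp_c"] - roll.mean()) / roll.std()

# Readings more than 3 rolling standard deviations out: maintenance candidates
alerts = readings[zscore.abs() > 3]
print(alerts.head())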

Kelley is excited at the prospects, but he also cautions that companies have to change the way they view themselves and their data to get the most out of IoT advancement.

“There is a fundamental change of mindset,” he explained, “and it will require different ways of approaching application development and how you look at the business. For example, a company might have to redefine itself from thinking that it only makes ‘makes trains,’ to a company that also ‘services trains with data.'”

The service element, warranties, service contracts, how you interact with the customer, and what you learn from these customer interactions that could be forwarded into predictive selling are all areas that companies might need to rethink and realign in their business as more IoT analytics come online. The end result could be a reformation of customer relationship management (CRM) to a strictly customer-centric model that takes into account every aspect of the customer’s “life cycle” with the company — from initial product purchases, to servicing, to end of product life considerations and a new beginning of the sales cycle.

Originally posted via “Surge in real-time big data and IoT analytics is changing corporate thinking”

Originally Posted at: Surge in real-time big data and IoT analytics is changing corporate thinking by analyticsweekpick

Development of the Customer Sentiment Index: Lexical Differences

This is Part 2 of a series on the Development of the Customer Sentiment Index (see introduction, and Part 1). The CSI assesses the extent to which customers describe your company/brand with words that reflect positive or negative sentiment. This post covers the development of a judgment-based sentiment lexicon and compares it to empirically-based sentiment lexicons.

Last week, I created four sentiment lexicons for use in a new customer experience (CX) metric, the Customer Sentiment Index (CSI). The four sentiment lexicons were empirically derived using data from a variety of online review sites from IMDB, Goodreads, OpenTable and Amazon/Tripadvisor. This week, I develop a sentiment lexicon using a non-empirical approach.

Human Judgment Approach to Sentiment Classification

The judgment-based approach does not rely on data to derive the sentiment values; rather this method requires the use of subject matter experts to classify words into sentiment categories. This approach is time-consuming, requiring the subject matter experts to manually classify each of the thousands of words in our empirically-derived lexicons. To minimize the work required by the subject matter experts, an initial set of opinion words were generated using two studies.

In the first study, as part of an annual customer survey, a B2B technology company included an open-ended survey question: “Using one word, please describe COMPANY’S products/services.” From 1619 completed surveys, 894 customers provided an answer to the question. Many respondents used multiple words or the company’s name as their response, reducing the number of useful responses to 689. From these responses, a total of 251 usable unique words were obtained.

Also, the customer survey included questions that required customers to provide ratings on measures of customer loyalty (e.g., overall satisfaction, likelihood to recommend, likelihood to buy different products, likelihood to renew) and satisfaction with the customer experience (e.g., product quality, sales process, ease of doing business, technical support).

In the second study, as part of a customer relationship survey, I solicited responses from customers of wireless service providers (a B2C sample). The sample was obtained through Mechanical Turk by recruiting English-speaking participants to complete a short customer survey about their experience with their wireless service provider. In addition to the standard rated questions in the customer survey (e.g., customer loyalty, CX ratings), the following question was used to generate the one-word opinion: “What one word best describes COMPANY? Please answer this question using one word.”

From 469 completed surveys, 429 customers provided an answer to the question. Many respondents used multiple words or the company’s name as their response, reducing the number of useful responses to 319. From these responses, a total of 85 usable unique words were obtained.

Sentiment Rating of Opinion Words

The list of customer-generated words for each sample was independently rated by the two experts. I was one of those experts. My good friend and colleague was the other expert. We both hold a PhD in industrial-organizational psychology and specialize in test development (him) and survey development (me). We have extensive graduate-level training on the topics of statistics and psychological measurement principles. Also, we have applied experience, helping companies gain value from psychological measurements. We each have over 20 years of experience in developing/validating tests and surveys.

For each list of words (N = 251 and N = 85), each expert was given the list and instructed to “rate each word on a scale from 0 to 10; where 0 is most negative sentiment/opinion and 10 is most positive sentiment/opinion; and 5 is the midpoint.” After providing their first rating of each word, each of the two raters was then given the opportunity to adjust their initial ratings: each rater received the list of words with their initial ratings and was asked to make any adjustments.

Results of Human Judgment Approach to Sentiment Classification

Table 1. Descriptive Statistics and Correlations of Sentiment Values across Two Expert Raters

Descriptive statistics of, and correlations among, the expert-derived sentiment values of the customer-generated words appear in Table 1. As you can see, the two raters assigned very similar sentiment ratings to words in both sets, and average ratings were similar. The inter-rater agreement between the two raters was r = .87 for the 251 words and r = .88 for the 85 words.

After slight adjustments, the inter-rater agreement between the two raters improved to r = .90 for the list of 251 words and .92 for the list of 85 words. This high inter-rater agreement indicated that the raters were consistent in their interpretation of the two lists of words with respect to sentiment.

Figure 1. Distribution of Sentiment Values of Customer-Generated Words using Subject Matter Experts’ Sentiment Lexicon

Because of the high agreement between the raters and comparable means between raters, an overall sentiment score for each word was calculated as the average of the raters’ second/adjusted rating (See Table 1 or Figure 2 for descriptive statistics for this metric).

Comparing Empirically-Derived and Expert-Derived Sentiment

In all, I have created five lexicons; four lexicons are derived empirically from four data sources (i.e., OpenTable, Amazon/Tripadvisor, Goodreads and IMDB) and one lexicon is derived using subject matter experts’ sentiment classification.

Table 2. Descriptive Statistics and Correlations among Sentiment Values of Customer-Generated Words across Five Sentiment Lexicons (N = 251)

I compared these five lexicons to better understand the similarity and differences of each lexicon. I applied the four empirically-derived lexicons to each list of customer-generated words. So, in all, for each list of words, I have 5 sentiment scores.
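
The scoring itself is straightforward. The sketch below scores a word list against two toy lexicons; the word/value pairs are tiny illustrative stand-ins for the OpenTable, Amazon/Tripadvisor, Goodreads, IMDB and expert lexicons.

import pandas as pd

lexicons = {
    "opentable": {"excellent": 9.1, "reliable": 7.8, "slow": 2.9},
    "experts":   {"excellent": 9.5, "reliable": 8.0, "slow": 2.0},
}
words = ["excellent", "reliable", "slow", "innovative"]  # customer-generated

scores = pd.DataFrame({name: [lex.get(w) for w in words]
                       for name, lex in lexicons.items()}, index=words)
print(scores)        # NaN where a lexicon lacks coverage
print(scores.corr()) # inter-lexicon agreement, as in Tables 2 and 3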

The descriptive statistics of, and correlations among, the five sentiment scores for the 251 customer-generated words appear in Table 2. Table 3 houses the same information for the 85 customer-generated words.

Table 3. Descriptive Statistics and Correlations among Sentiment Values of Customer-Generated Words across Five Sentiment Lexicons (N = 85)

As you can see, there is high agreement among the empirically-derived lexicons (average correlation = .65 for the list of 251 words and .79 for the list of 85 words).

There are statistically significant mean differences across the empirically-derived lexicons; Amazon/Tripadvisor has the highest average sentiment value and Goodreads has the lowest. Lexicons from IMDB and OpenTable provide similar means. The expert judgment lexicon provides the lowest average sentiment ratings for each list of customer-generated words. The absolute sentiment value of a word is dependent on the sentiment lexicon you use. So, pick a lexicon and use it consistently; changing your lexicon could change your metric.

Looking at the correlations of the expert-derived sentiment values with each of the empirically-derived sentiment values, we see that the OpenTable lexicon had a higher correlation with the experts than the Goodreads lexicon did. This pattern of results makes sense: the OpenTable sample is much more similar to the sample on which the experts provided their sentiment ratings. OpenTable represents a customer/supplier relationship regarding a service, while the Goodreads sample represents a different type of relationship (customer/book quality).

Summary and Conclusions

These two studies demonstrated that subject matter experts are able to scale words along a sentiment scale. There was high agreement among the experts in their classification.

Additionally, these judgment-derived lexicons were very similar to four empirically derived lexicons. Lexicons based on subject matter experts’ sentiment classification/scaling of words are highly correlated to empirically-derived lexicons. It appears that each of the five sentiment lexicons tells you roughly the same thing as the other lexicons.

The empirically-derived lexicons are less comprehensive than the subject matter experts’ lexicon with respect to customer-generated words. By design, the subject matter experts classified all words generated by customers, while some of the words customers used do not appear in the empirically-derived lexicons. For example, the OpenTable lexicon covers only 65% (164/251) of the customer-generated words for Study 1 and 71% (60/85) for Study 2. Empirically-derived lexicons used to calculate the Customer Sentiment Index could therefore be augmented with lexicons based on subject matter experts’ classification/scaling of words.

In the next post, I will continue presenting information about validating the Customer Sentiment Index (CSI). So far, the analysis shows that the sentiment scores of the CSI are reliable (we get similar results using different lexicons). We now need to understand what the CSI is measuring. I will show this by examining the correlation of the CSI with other commonly used customer metrics, including likelihood to recommend (e.g., NPS), overall satisfaction, and CX ratings of important customer touch points (e.g., product quality, customer service). Examining correlations of this nature will also shed light on the usefulness of the CSI in a business setting.

Originally Posted at: Development of the Customer Sentiment Index: Lexical Differences by bobehayes

Nov 02, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data shortage

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Malaysia opens digital government lab for big data analytics by analyticsweekpick

>> Sep 07, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> A Visual Approach to Data Management: The Transcendent Power of Data Visualizations by jelaniharper

Wanna write? Click Here

[ NEWS BYTES]

>> Big Data and Drone Tech Can Help Fight Famine – The Cipher Brief (Under: Big Data)

>> New Jersey Resources Corp (NYSE:NJR) Institutional Investor Sentiment Analysis – Finance News Daily (Under: Sentiment Analysis)

>> Different types of virtualization – RCR Wireless – RCR Wireless News (Under: Virtualization)

More NEWS? Click Here

[ FEATURED COURSE]

Artificial Intelligence


This course includes interactive demonstrations which are intended to stimulate interest and to help students gain intuition about how artificial intelligence methods work under a variety of circumstances…. more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython


Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Strong business case could save your project
Like anything in corporate culture, a project is oftentimes about the business, not the technology. The same thinking applies to data analysis: it’s not always about the technicality but about the business implications. Data science project success criteria should include project management success criteria as well. This will ensure smooth adoption, easy buy-ins, room for wins, and cooperating stakeholders. So, a good data scientist should also possess some qualities of a good project manager.

[ DATA SCIENCE Q&A]

Q:How do you take millions of users with 100’s transactions each, amongst 10k’s of products and group the users together in meaningful segments?
A: 1. Some exploratory data analysis (get a first insight)

* Transactions by date
* Count of customers Vs number of items bought
* Total items Vs total basket per customer
* Total items Vs total basket per area

2. Create new features (per customer):

Counts:

* Total baskets (unique days)
* Total items
* Total spent
* Unique product id

Distributions:

* Items per basket
* Spent per basket
* Product id per basket
* Duration between visits
* Product preferences: proportion of items per product cat per basket

3. Too many features, dimension-reduction? PCA?

4. Clustering:

* PCA

5. Interpreting model fit
* View the clustering by principal component axis pairs, e.g., PC1 vs PC2, PC2 vs PC3.
* Interpret each principal component regarding the linear combination it’s obtained from; example: PC1=spendy axis (proportion of baskets containing spendy items, raw counts of items and visits)
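
A compact sketch of that pipeline (features, PCA, then clustering); the feature file, component count and number of segments are illustrative.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Per-customer feature table built as in steps 1-2 (placeholder file)
features = pd.read_csv("customer_features.csv", index_col="customer_id")

X = StandardScaler().fit_transform(features)          # scale before PCA
pcs = PCA(n_components=10).fit_transform(X)           # step 3: reduce dimensions

# Step 4: cluster in the reduced space into k segments
segments = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pcs)
features["segment"] = segments
print(features.groupby("segment").mean())             # step 5: profile segments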

Source

[ VIDEO OF THE WEEK]

@AngelaZutavern & @JoshDSullivan @BoozAllen discussed Mathematical Corporation #FutureOfData

Subscribe to YouTube

[ QUOTE OF THE WEEK]

He uses statistics as a drunken man uses lamp posts—for support rather than for illumination. – Andrew Lang

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MichOConnell, @Tibco

Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

Estimates suggest that by better integrating big data, healthcare could save as much as $300 billion a year — that’s equal to reducing costs by $1000 a year for every man, woman, and child.

Sourced from: Analytics.CLUB #WEB Newsletter