Dec 26, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

image
SQL Database  Source

[ AnalyticsWeek BYTES]

>> #FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency – Playcast – Data Analytics Leadership Playbook Podcast by v1shal

>> Effects of Labeling the Neutral Response in the NPS by analyticsweek

>> Thought The Internet Was Life-Changing? Internet of Things Is Here by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

Python for Beginners with Examples

image

A practical Python course for beginners with examples and exercises…. more

[ FEATURED READ]

Machine Learning With Random Forests And Decision Trees: A Visual Guide For Beginners

image

If you are looking for a book to help you understand how the machine learning algorithms “Random Forest” and “Decision Trees” work behind the scenes, then this is a good book for you. Those two algorithms are commonly u… more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning: the numbers in your data represent something of interest, an outcome that is important to your business. Understanding what those numbers mean is at the heart of the veracity of your data.

[ DATA SCIENCE Q&A]

Q:Give examples of bad and good visualizations?
A: Bad visualization:
– Pie charts: comparing items by area is difficult, especially when there are many items
– Color choice for classes: heavy use of red, orange, and blue; readers may assume the colors signal good (blue) versus bad (orange and red) when they merely identify segments
– 3D charts: can distort perception and therefore skew the data

Good visualization:
– Using a solid line in a line chart: dashed and dotted lines can be distracting
– Heat map with a single color: some colors stand out more than others and give extra weight to that data; a single color with varying shades shows intensity better
– Adding a trend line (regression line) to a scatter plot helps the reader spot trends
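As a numeric sketch of the trend-line advice: an ordinary least-squares fit gives the slope and intercept of the line you would overlay on the scatter plot (pure Python; the data points are invented for illustration, and the plotting itself is omitted).

```python
def fit_trend_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept) of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented points scattered around y = 2x + 1
xs = [0, 1, 2, 3, 4, 5]
ys = [1.1, 2.9, 5.2, 6.8, 9.1, 10.9]
slope, intercept = fit_trend_line(xs, ys)
```

Drawing this fitted line over the scatter immediately directs the reader's eye to the underlying trend.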

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with  John Young, @Epsilonmktg


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Data that is loved tends to survive. – Kurt Bollacker, Data Scientist, Freebase/Infochimps

[ PODCAST OF THE WEEK]

@TimothyChou on World of #IOT & Its #Future Part 1 #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

73% of organizations have already invested or plan to invest in big data by 2016

Sourced from: Analytics.CLUB #WEB Newsletter

Top 5 Ways AI Will Impact the Insurance Industry in 2019

I recently saw a tweet from Mat Velloso – “If it is written in Python, it’s probably machine learning. If it is written in PowerPoint, it’s probably AI.” This quote is probably the most accurate summarization of what has happened in AI over the past couple of years.

A few months back, The Economist shared a chart showing the number of CEOs who mentioned AI in their earnings calls.

Towards the end of 2017, even Vladimir Putin said: “The nation that leads in AI will be the ruler of the world.” Beyond all this hype, there is a lot of real technology being built.

So how is 2019 going to look for all of us in the insurance world?

Source: Top 5 Ways AI Will Impact the Insurance Industry in 2019 by analyticsweek

Dec 19, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

image
Insights  Source

[ AnalyticsWeek BYTES]

>> State of #FutureOfData in 2017 by admin

>> Office Depot Stitches Together the Customer Journey Across Multiple Touchpoints by analyticsweek

>> How to Save and Reuse Data Preparation Objects in Scikit-Learn by administrator

Wanna write? Click Here

[ FEATURED COURSE]

Machine Learning

image

6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending … more

[ FEATURED READ]

The Black Swan: The Impact of the Highly Improbable

image

A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences. In this groundbreaking and prophetic book, Taleb shows in a playful way that Black Swan events explain almost eve… more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption, and one of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way in helping get quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q:How do you optimize algorithms (parallel processing and/or faster algorithms)? Provide examples of both.
A: “Premature optimization is the root of all evil” – Donald Knuth

Parallel processing, for instance in R on a single machine:
– the doParallel and foreach packages
– doParallel: registers a parallel backend across n cores of the machine
– foreach: assigns tasks to each core
– using Hadoop on a single node
– using Hadoop on multiple nodes

Faster algorithms:
– In computer science the Pareto principle applies: roughly 90% of the execution time is spent in 10% of the code
– Data structures: the right choice affects performance
– Caching: avoid recomputing unnecessary work
– Improvements at the source-code level
For instance, on early C compilers, WHILE(something) was slower than FOR(;;) because WHILE evaluated “something” and then made a conditional jump testing whether it was true, while FOR used an unconditional jump.
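The doParallel/foreach pattern above can be sketched in Python with the standard library's thread pool (threads are used here for portability; ProcessPoolExecutor is the closer analogue for CPU-bound work). The task function is a made-up stand-in for real work:

```python
from concurrent.futures import ThreadPoolExecutor

def task(seed):
    """Made-up stand-in for one independent, expensive unit of work."""
    return sum((seed * i) % 97 for i in range(1, 1001))

def run_parallel(seeds, workers=4):
    """Distribute independent tasks across a worker pool, like foreach over a doParallel backend."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves the input order of results
        return list(pool.map(task, seeds))

results = run_parallel(range(8))
```

The key requirement is the same as in R: the tasks must be independent, so each worker can run its share without coordinating with the others.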

Source

[ VIDEO OF THE WEEK]

Agile Data Warehouse Design for Big Data Presentation


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Getting information off the Internet is like taking a drink from a firehose. – Mitchell Kapor

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData with Jon Gibs(@jonathangibs) @L2_Digital


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Every second we create new data. For example, we perform 40,000 search queries every second (on Google alone), which works out to about 3.5 billion searches per day and 1.2 trillion searches per year. In August 2015, over 1 billion people used Facebook in a single day.

Sourced from: Analytics.CLUB #WEB Newsletter

The Significance of Self-Service Analytics: Advice from Real Application Teams

It’s a seemingly obvious but often-missed point: Different application end users will want to use embedded analytics in different ways. If you’re like most application teams, you’ll have everyone from basic users who want easy-to-use, interactive dashboards, to power users who demand sophisticated capabilities like workflow integration and operational reporting.

How can you meet the needs of every end user? When you embed self-service analytics in your application’s dashboards and reports, you empower all your end users to get the data and analytics they need—without requiring constant technical assistance.

>> Related: 7 Questions Every Application Team Should Ask Before Choosing an Analytics Vendor <<

The fact is, end users don’t want to send multiple ad-hoc reporting requests to your development team. They’re much happier when they can get the information they need on their own. Users want to do whatever they want with their data—which makes self-service analytics an important feature for any application.

The application team at Fieldology learned this firsthand when they were building a new application. They decided to engage their users by putting them in control to get the data they need, create new dashboards and reports, and explore information on their own:

 

“We believe that success is not about the collection of data, but how you use that data to make an impact on the bottom line. Creating portals and pre-built reports was a great start, but it soon became clear that we would need to develop more ad-hoc and self-service reporting capabilities. We needed to expand our solutions so they could be used by everyone from senior management, analysts, account teams and the field staff.”

 – Paul King, Fieldology

 

End users aren’t the only ones who benefit from self-service analytics. Embedding self-service means reducing the backlog of custom requests for your developers and IT team. In fact, according to the 2018 State of Embedded Analytics Report, 49 percent of companies saw a drop in the number of ad-hoc requests from end users after embedding self-service analytics. Since users no longer have to come to you with every ad-hoc analytics request, it frees your application team up to focus on more important matters.

The application leaders at Signeture Solutions and Hylant were both happy with the dual benefits of self-service analytics:

 

“Many of my customers are business decision makers, and they don’t want to have to wait a week for IT to give them a specific chart or calculation. IT will always have its place for installation and maintenance. But now, IT can simply provide all the information to business users and allow them to make decisions independently based on what they see in the data.”  

– Barry Nicolaou, Signeture Solutions

 

 “We’ve been able to push back report requests from users to the business unit, and say ‘Hey, we gave you this data, and you can now utilize it as you see fit.’ They’re not asking for just a report. They actually have the data available to them and they can make their own correlations and build on their knowledge level.”

– Scott Lindsey, Hylant

 

Self-service analytics is a great way to engage end users and make your application stickier. Nearly 70 percent of application teams saw an increase in the time spent in their applications after they embedded self-service, as shown in the 2018 State of Embedded Analytics Report.

According to the team at iDfour, their users are so excited about self-service analytics they can barely even get through a demonstration:

 

“When we show a new customer the self-service tool, they usually want to start asking questions of their data right away – before I can even finish explaining the features. Once they see what’s possible, they want to dive in immediately.”

– Deanna Antosh, iDfour

 

Ready to embed self-service analytics in your application? Find out how Logi Analytics can help you engage end users and reduce the backlog of ad-hoc reporting requests. Watch a demo today >

 

Source

6 Ways Custom B2B Software Can Improve Your Business

Businesses increasingly understand the importance of going digital in this era of fast-evolving technology. Digitizing certain operations can improve overall business growth and add to the revenue of a business. A report by McKinsey from 2016 found that digital leaders in the B2B industry enjoyed 5X more revenue than other businesses. Businesses are rapidly looking […]

The post 6 Ways Custom B2B Software Can Improve Your Business appeared first on TechSpective.

Originally Posted at: 6 Ways Custom B2B Software Can Improve Your Business

Dec 12, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

image
Pacman  Source

[ AnalyticsWeek BYTES]

>> What is data security For Organizations? by analyticsweekpick

>> As for Types of Chatbots: Deep Learning for Chatbot by administrator

>> New Research Reveals Best Self-Service Analytics Is Embedded by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz

image

Use data to build a better startup faster in partnership with Geckoboard… more

[ FEATURED READ]

The Black Swan: The Impact of the Highly Improbable

image

A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences. In this groundbreaking and prophetic book, Taleb shows in a playful way that Black Swan events explain almost eve… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from zombie apocalypse from unscalable models
One living, breathing zombie in today’s analytical models is the absence of error bars. Not every model is scalable or holds up as data grows. Error bars should be attached to almost every model and duly calibrated; as business models take in more data, the error bars keep the model sensible and in check. If error bars are not accounted for, our models become susceptible to failures we never want to see.

[ DATA SCIENCE Q&A]

Q:How do you assess the statistical significance of an insight?
A: Is this insight just observed by chance, or is it a real insight?
Statistical significance can be assessed using hypothesis testing:
– State a null hypothesis, which is usually the opposite of what we wish to test (classifiers A and B perform equivalently; treatment A is equal to treatment B)
– Then choose a suitable statistical test and test statistic used to reject the null hypothesis
– Also choose a critical region for the statistic to lie in that is extreme enough for the null hypothesis to be rejected (p-value)
– Calculate the observed test statistic from the data and check whether it lies in the critical region

Common tests:
– One-sample Z test
– Two-sample Z test
– One-sample t-test
– Paired t-test
– Two-sample pooled equal-variances t-test
– Two-sample unpooled unequal-variances t-test with unequal sample sizes (Welch’s t-test)
– Chi-squared test for variances
– Chi-squared test for goodness of fit
– ANOVA (for instance: are two regression models equal? F-test)
– Regression F-test (i.e., is at least one of the predictors useful in predicting the response?)
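As a sketch of one test from the list, here is Welch's two-sample t statistic computed by hand in Python (the sample values are invented; in practice scipy.stats.ttest_ind with equal_var=False also returns the p-value):

```python
from math import sqrt

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unpooled, unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / sqrt(var_a / na + var_b / nb)

t = welch_t([5.1, 4.9, 5.3, 5.0, 5.2], [4.4, 4.6, 4.5, 4.3, 4.7])
```

The statistic is then compared against the critical region (via the t distribution with Welch-Satterthwaite degrees of freedom) to decide whether to reject the null hypothesis.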

Source

[ VIDEO OF THE WEEK]

#FutureOfData with @theClaymethod, @TiVo discussing running analytics in media industry


Subscribe to YouTube

[ QUOTE OF THE WEEK]

I’m sure the highest-capacity storage device will not be enough to record all our stories; because every time with you is very valuable data.

[ PODCAST OF THE WEEK]

Dave Ulrich (@dave_ulrich) talks about role / responsibility of HR in #FutureOfWork #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

More than 5 billion people are calling, texting, tweeting and browsing on mobile phones worldwide.

Sourced from: Analytics.CLUB #WEB Newsletter

How Telangana government authenticates beneficiaries using AI, ML, Big Data and Deep Learning tools

Aadhaar-less authentication being piloted in authenticating pensioners

Several government agencies across the country have started using Aadhaar identity to authenticate beneficiaries of different schemes. The Telangana government, which attracted criticism from privacy activists for allegedly using facial recognition software without legal sanction, has developed a method to authenticate its beneficiaries, using Artificial Intelligence, Machine Learning, Big Data and Deep Learning solutions.

To start with, the State government has implemented the RTDAI (Realtime Digital Authentication of Identity) in authenticating pensioners without any need for special hardware at the user end. It doesn’t require fingerprints or iris images to authenticate a person.

“All you need is a smartphone to authenticate yourself remotely. Take a photo and upload it in the exclusive app launched for the purpose. The software will verify the photo and demographic details against the pension database, avoiding the need to authenticate people physically,” GT Venkateshwar Rao, Managing Director, Telangana State Technology Services (TSTS), said.

 

The app is made available in T App Folio, the umbrella app for various e-governance offerings from Telangana Government.

Those who consent to use the IT-backed authentication solution can avail of the service.

The whole process takes just under one minute to answer the two crucial questions in authenticating a pensioner: Is the pensioner in question alive? Is she or he the legitimate pensioner?

The technological tools can address challenges such as old pictures in the database versus the latest photos of users: the system can account for the changes a face undergoes over time. The same goes for names. For example, the system can match K Aseervadam with Aseervadam Karra because it checks other parameters available in the data held by the department in question.

Three-factor authentication

The Pensioners Life Certificate Authentication using a Selfie (PLCS) method deploys three levels of authentication – demographics (name, father’s name and address), photo and liveness. The TSTS is using an Artificial Intelligence-based liveness check solution developed by a Bengaluru start-up and a Machine Learning-based demographic comparison solution.

Post one-time registration (consent) and authentication, the user can authenticate himself from anywhere and anytime he or she is asked to.

The government claims the success rate in authentication of pensioners is about 93 per cent. Of the 13,759 applicants, the system could successfully authenticate 12,763 pensioners. “The system would learn over a period of time and the success rate could be improved as it learns. The success rate could be as high as 96-98 per cent,” Venkateshwar Rao, who is also Commissioner (Electronic Service Delivery), said.

He was showcasing the solution at a seminar on Artificial Intelligence and Blockchain for Efficient Public Service Delivery.

The AI, ML and deep learning solutions would quickly check the details submitted by the user against the information accumulated in public data systems.

He said the use cases could be many. Deployment of this method in authenticating the voters is one.

Source: The Hindu Business Line

Originally Posted at: How Telangana government authenticates beneficiaries using AI, ML, Big Data and Deep Learning tools by administrator

How Do You Measure Delight?

In an earlier article, we reviewed five competing models of delight.

The models differed in their details, but most shared the general idea that delight is composed of an unexpected positive experience. Or, for the most part, delight is a pleasant surprise.

However, there is disagreement on whether you actually need surprise to be delighted. And if you don’t need surprise, then delight is really just an extreme form of satisfaction.

With the disagreement on how to define delight, there consequently is no agreement on exactly how to measure delight. But that shouldn’t be too much of a surprise. We encountered the same issue with satisfaction measures. Despite satisfaction being a seemingly well understood (and older) construct, we still found five common ways satisfaction is measured in the literature.

So even if researchers coalesce around a common definition of delight, don’t expect there to be a universally accepted single delight measure.

While academics love to argue about the “right” model, pragmatically, researchers want to know how to measure (and improve) things.

In this article we’ll review many ways researchers have attempted to measure delight in the published literature. In an upcoming article, we’ll more closely examine which ones may be best at capturing delight and whether it’s even worth measuring at all.

The three most common ways of measuring delight use single items, two+ items, and a mix of feelings, but we’ll review some less common measurement methods as well. You’ll see that there are more ways of measuring delight than there are models of delight!

1. Single-Item Measures

One common approach to measuring delight is to simply ask participants how delighted they were with an experience.

One of the first measures was used by Westbrook in 1980. He used a single seven-point scale, anchored by Delighted and Terrible, which he found was correlated with intention.

In his study, participants were asked how they felt about an experience. The scale was later used by Oliver et al. (1997) in their foundational paper on delight (in addition to other scales discussed below).

More recently, JD Power described a Net Delighted (ND) score, another single-item scale. Participants were asked to reflect on a service experience on a ten-point satisfaction scale, with 10 being “outstanding” and 1 to 5 being “displeased” (the scale anchors were not provided). The “Net” score was derived by subtracting the displeased from the outstanding (the share of 10s minus the share of 1s to 5s).
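Based on that description, the Net Delighted computation can be sketched as follows (the ratings are invented, and this follows the paraphrase above rather than JD Power's official formula):

```python
def net_delighted(ratings):
    """Net Delighted as paraphrased above: % of 10s minus % of 1-5s on a 1-10 scale."""
    n = len(ratings)
    outstanding = sum(1 for r in ratings if r == 10) / n
    displeased = sum(1 for r in ratings if 1 <= r <= 5) / n
    return 100 * (outstanding - displeased)

# 4 of 10 respondents gave a 10; 2 of 10 gave a 1-5
score = net_delighted([10, 10, 9, 8, 7, 10, 4, 5, 6, 10])
```

Like the Net Promoter Score it resembles, the result can range from -100 (everyone displeased) to +100 (everyone outstanding).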

Using a single item with the resulting high scores does seem very similar to an extreme form of satisfaction. And if delight can be considered as a higher level of satisfaction without surprise, then it’s likely these single items would be adequate. However, not all researchers agree that a single item is adequate to capture delight.

2. Two+ Scales

Most of the models of delight we reviewed describe two aspects of delight: surprise and joy/pleasantness.

Consequently, an alternative (and arguably more popular way) to measure delight is to ask at least two questions that explore the multiple underlying emotions.

In the Oliver (1997) study, they also used a seven-point satisfaction scale and a seven-point performance scale, and computed delight as a top-box response on both scales. The same scales were also used in a later study by Vanhamme (2003).
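A top-box-on-both-scales computation, as described for the Oliver study, might look like this (illustrative Python; the scores are invented):

```python
def topbox_delight(sat_scores, perf_scores, top=7):
    """Share of respondents giving the top-box answer on BOTH seven-point scales."""
    pairs = list(zip(sat_scores, perf_scores))
    delighted = sum(1 for s, p in pairs if s == top and p == top)
    return delighted / len(pairs)

# 2 of 5 respondents scored 7 on both satisfaction and performance
rate = topbox_delight([7, 6, 7, 7, 5], [7, 7, 6, 7, 7])
```

Requiring the top box on both scales is stricter than either scale alone, which is the point: delight is treated as the intersection of extreme satisfaction and extreme perceived performance.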


In another example, Loureiro (2010) measured delight by having respondents (rural tourists) rate their agreement to two items on a five-point scale (1 = strongly disagree, 5 = strongly agree). They used the mean, rather than top-box scores.

Ngobo (1999) considered delight as equivalent to 100% satisfaction and used responses to the following four seven-point agreement/Likert items (the anchors also weren’t fully described in his study):

3. Adjective Lists

Another approach using multiple items found in the literature is to have participants respond to several adjectives that describe the overlapping emotions associated with delight. This is similar to the Microsoft Reaction Cards. For example, Oliver et al. (1997) used 13 adjectives (adapted from Larsen and Diener, 1992), including the three below, and had participants rate how frequently they felt these emotions.

In a study by Ball and Barnes (2017), they used four adjectives with the same frequency scale from Never to Always (about a Bruce Springsteen rock concert):

They averaged across the item responses to get their measure of delight.

Website Delight

In one of the rare cases where delight was measured using adjective lists outside the hospitality industry, Finn (2005) had participants rate how frequently they felt the following 13 emotions taken from Oliver (1997), also from Never to Always, on a website experience (e.g., Amazon, eBay, Ticketmaster).

While it would seem that just asking about delight would be adequate, Finn found that the “delight” adjective (along with happy and excited) cross-loaded on two factors that made it more problematic to statistically model. Cross-loading may suggest that the term delight isn’t a well-formed single concept in the minds of participants; this is something future research can examine more.

Diary Study Delight

Most of the studies described in the literature rely on surveys often collected well after an event or occurrence. One concern with this approach is that too much time has passed since the event (a non-contemporaneous account) for people to accurately reflect on their emotional reaction.

In an interesting multi-part study by Vanhamme (2003), she had her participants fill out a diary (a technique popular with UX researchers). Participants wrote down their purchases over seven nights, noting whether each item was consumed (consumption/purchase) and whether they were surprised.

Vanhamme then used a variation of the Differential Emotional Scale (DES) to have participants reflect on their experience. The DES is described in Izard (1977) and has multiple items for seven constructs, including both positive (Surprise and Enjoyment) and negative (Anger, Fear, Disgust, Contempt, and Disdain) attitudes.

The six adjectives associated with two of the constructs (surprise and enjoyment) were used as the measure of delight in her study. The adjectives are anchored from 1 = Not at all to 5 = A lot.

4. Kano Questions

The Kano can be seen as a special case of the two+ item format. It’s familiar to UX researchers and has its own (somewhat peculiar) way to get at delight. Whereas a lot of the literature on delight is measured in the context of a service experience (amusement park, concert, food consumption, hotel stays), the Kano is suited particularly well for feature identification.

For the Kano, participants are asked to answer two questions. The first is called the functional question: What are your feelings when the feature is included? The second is called the dysfunctional question: What are your feelings when this feature is NOT included?

If a respondent to the functional question picks “I like it that way” and responds to the dysfunctional question with “I can live with it that way,” the feature is a delighter.
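That pairing of functional and dysfunctional answers is usually resolved with a Kano evaluation table. The sketch below is a simplified version of the standard table (labels and edge cases vary across sources), and it reproduces the delighter rule quoted above:

```python
def kano_category(functional, dysfunctional):
    """Simplified Kano evaluation table mapping an answer pair to a category."""
    if functional == dysfunctional:
        # Same answer to both questions is contradictory at the extremes
        return "questionable" if functional in ("like", "dislike") else "indifferent"
    if functional == "like":
        # Liked when present; a delighter unless its absence is actively disliked
        return "one-dimensional" if dysfunctional == "dislike" else "attractive"
    if dysfunctional == "like":
        return "reverse"       # users prefer the feature absent
    if dysfunctional == "dislike":
        return "must-be"       # expected; its absence causes dissatisfaction
    if functional == "dislike":
        return "reverse"
    return "indifferent"

# The rule quoted above: "like it" + "can live with it" -> delighter (attractive)
category = kano_category("like", "live with")
```

Tallying these categories across respondents then tells you which proposed features are delighters versus must-haves.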

We’ve found that a good way to administer the Kano and minimize potential participant fatigue is to use a card-sort type option such as the one below from our MUIQ platform. We also provide an example of how to categorize features before presenting these options.

5. Text Analysis

All the prior examples of measuring delight rely on a closed-ended type of rating scale. A study by Magnini et al. (2011) used open-ended responses instead. They analyzed the content of 743 travel blogs for phrases such as “pleasant surprise,” “delightful surprise,” or “excellent surprise.”
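A minimal version of that phrase search might look like this in Python (the review snippets are invented):

```python
import re

DELIGHT_PHRASES = ["pleasant surprise", "delightful surprise", "excellent surprise"]

def count_delight_mentions(reviews):
    """Count reviews that contain any delight phrase (case-insensitive)."""
    pattern = re.compile("|".join(re.escape(p) for p in DELIGHT_PHRASES), re.IGNORECASE)
    return sum(1 for text in reviews if pattern.search(text))

reviews = [
    "What a pleasant surprise - it just works.",
    "Sound quality was mediocre.",
    "A Delightful Surprise for the whole family!",
]
hits = count_delight_mentions(reviews)
```

A fixed phrase list will of course miss paraphrases ("better than I expected"), which is one reason this approach is hard to validate.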

It’s unclear how effective this approach is as it hasn’t been written about widely in the literature (that I could find). It would be difficult to gauge reliability, for example. However, if an organization has access to product or service reviews (even public facing ones), this may be a quick way of both gauging delight and understanding the context in which the delight was experienced. For example, I did a search for “pleasant surprise” on the reviews of Amazon’s Echo product and found 5 responses (out of over 5,000 reviews!) exemplifying this definition of delight:

6. Objective Measures of Delight

The previous examples of measuring delight relied on self-reports, either in surveys, contemporaneous diary studies, or in service or product reviews. There are at least three potential problems with self-reported data:

Rationalizing: People may be just rationalizing their emotions with their actions.

Social Acceptance: Participants may not want to share their emotions with strangers or in surveys (especially if it’s associated with their identity).

Inaccessibility: Participants may not always be able to identify their emotional state (because they lack abilities of introspection and retrospection).

While delight can be thought of as a subjective experience (not all people consider the same things delightful), there have been attempts to measure delight objectively, possibly avoiding the potential problems with self-reported data, including:

  1. GSR: Galvanic skin response is a measure of essentially how much people sweat (similar to a lie detector).
  2. EMG: Muscle movements are measured using an electromyogram (EMG).
  3. Facial expressions: Participant facial expressions are videotaped and coded independently by three judges, by measuring eyebrow raising, eye widening, mouth opening. (See Reisenzein, 2000.)

However, the study by Reisenzein found that GSR and EMG measures were both difficult to collect (people needed to hold still) and didn’t correlate with other subjective measures. In his study, he had better luck with coding facial expressions and correlating those with other subjective measures (and avoided hooking participants up to all that equipment, reminiscent of something from A Clockwork Orange!).

Consequently, Vanhamme’s 2003 paper (the same one we cited earlier) used facial expressions to gauge surprise. In her third study, participants consumed strawberry yogurt with a foldable spoon (which was meant to elicit surprise) or without one. The facial expressions correlated modestly with other subjective measures. If facial expressions can be collected automatically (with software), they may be one way to gauge surprise or joy, although this would likely still need to be coupled with a subjective measure to gauge delight.

Summary and Discussion

A review of the literature on how to measure delight revealed:

There isn’t a single measure of delight. If you hear people talking about measuring delight, ask which model they’re using and certainly which measures they’re using. Our review of the literature uncovered at least ten different scales and multiple methods of collection. In most studies, multiple measures were used (often a combination of 2+ rating scales and adjective frequency lists).

Multi-item subjective measures dominate, but single items may be adequate. Our review of the published literature revealed that most studies use multi-item measures of delight, but if surprise isn’t necessary for delight (an open question), then single-item measures (e.g., Terrible to Delightful) may well be adequate. It may also mean that delight is not distinct from some form of extreme satisfaction. Whether any delight measures offer better predictive validity (of retention or usage) is a subject for future research (and a future article).

There are multiple methods of data collection. Surveys of one point in time tend to be the dominant method for collecting delight measures. Other methods include more contemporaneous diary studies, analyzing text in product reviews, and observing facial expressions.

Some objective measures may work. Some researchers have found that coding facial expressions shows promise for measuring surprise and/or joy. Attempts at using galvanic skin response and muscle movements (EMG) have been impractical, and they didn’t correlate with other subjective measures. It’s unclear, though, whether facial expressions alone can measure delight (rather than just surprise or joy) and whether they really are superior to self-reports (they certainly require more involvement).

Correlation with objective measures increases the credibility of subjective measures. Interestingly, the finding that facial expression coding correlated at least modestly in some studies (e.g., r ~ .3) with some subjective measures of delight provides additional validation for subjective measures. Coding facial expressions may be impractical (too costly or time consuming), but the correlation suggests researchers can place more trust in these easy-to-collect subjective measures. How delightful.
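That validation step reduces to computing a Pearson correlation between the two measurement channels. A from-scratch sketch with invented scores (the r ~ .3 figure above comes from the cited studies, not from this data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: coded facial-expression intensity (0-1) paired with
# self-reported delight ratings (1-5) for the same six participants.
facial = [0.2, 0.9, 0.4, 0.7, 0.1, 0.8]
reported = [2, 4, 2, 5, 3, 4]
print(round(pearson_r(facial, reported), 2))
```

A moderate positive r between channels is what licenses the shortcut of dropping the expensive objective channel and trusting the cheap subjective one.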


Originally Posted at: How Do You Measure Delight? by analyticsweek

Dec 05, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

Image: Statistically Significant (Source)

[ AnalyticsWeek BYTES]

>> Fortune 100 CEOs And Their Path To Success by v1shal

>> The Right Way to Migrate Real-Time Data to the Cloud by jelaniharper

>> Best Practices For Building Talent In Analytics by analyticsweekpick

Wanna write? Click Here

[ FEATURED COURSE]

Master Statistics with R


In this Specialization, you will learn to analyze and visualize data in R, create reproducible data analysis reports, demonstrate a conceptual understanding of the unified nature of statistical inference, perform fre… more

[ FEATURED READ]

Antifragile: Things That Gain from Disorder


Antifragile is a standalone book in Nassim Nicholas Taleb’s landmark Incerto series, an investigation of opacity, luck, uncertainty, probability, human error, risk, and decision-making in a world we don’t understand. The… more

[ TIPS & TRICKS OF THE WEEK]

Strong business case could save your project
Like anything in corporate culture, a project is often about the business, not the technology, and the same thinking applies to data analysis. It’s not always about the technicalities but about the business implications. Data science project success criteria should therefore include project management success criteria as well. This ensures smooth adoption, easy buy-in, room for wins, and cooperative stakeholders. So a good data scientist should also possess some qualities of a good project manager.

[ DATA SCIENCE Q&A]

Q:What are the drawbacks of linear model? Are you familiar with alternatives (Lasso, ridge regression)?
A: * Assumes a linear relationship and well-behaved (normal, constant-variance) errors
* Can’t be used directly for count or binary outcomes
* Fixed model flexibility: risk of over- or underfitting
* Alternatives: see question 4 about regularization (Lasso, ridge regression)

Source
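To make the regularization alternative concrete, here is a minimal NumPy sketch of ridge regression via its closed form, beta = (X^T X + alpha*I)^-1 X^T y, on synthetic data (Lasso has no closed form and needs an iterative solver):

```python
import numpy as np

# Synthetic regression problem with known coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)

def ridge(X, y, alpha):
    """Ridge solution via the normal equations with an L2 penalty."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

ols = ridge(X, y, alpha=0.0)       # alpha = 0 recovers plain least squares
shrunk = ridge(X, y, alpha=100.0)  # heavy penalty shrinks the coefficients

print(np.linalg.norm(shrunk) < np.linalg.norm(ols))  # True
```

The penalty trades a little bias for lower variance, which is exactly the knob a plain linear model lacks.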

[ VIDEO OF THE WEEK]

Decision-Making: The Last Mile of Analytics and Visualization


Subscribe on YouTube

[ QUOTE OF THE WEEK]

The goal is to turn data into information, and information into insight. – Carly Fiorina

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Nathaniel Lin (@analytics123), @NFPA


Subscribe: iTunes | Google Play

[ FACT OF THE WEEK]

It would take every person in the US tweeting three tweets per minute for 26,976 years to produce the volume of data the world now creates daily.

Sourced from: Analytics.CLUB #WEB Newsletter

For the airline industry, big data is cleared for take-off

Tracking bags, personalizing offers, boosting loyalty, and optimizing operations are all goals of a renewed data-driven approach by major airlines.

When a customer checks in for a flight with United Airlines, there is typically an array of potential add-on offers to navigate through: flight upgrades, access to the airline’s United Club, and more.

Under United’s old “collect and analyze” approach to data, the airline would use information about customers’ choices about those items, in aggregated fashion, to “see what the most successful products were, and market with those [insights] in mind,” said Scott Wilson, the company’s vice president of e-commerce and merchandising.

That approach has changed. As of the beginning of this year, “collect, detect, act” is United’s new data-focused mantra, and it’s changing the way the airline serves its customers.

“Now we look at who the customer is and his or her propensity to buy certain products,” Wilson explained. More than 150 variables about that customer—prior purchases and previous destinations among them—are now assessed in real time to determine an individual’s likely actions, rather than an aggregated group of customers.

The result, delivered about 200 milliseconds later, is a dynamically generated offer tailored to the individual. Its terms, on-screen layout, copy, and other elements will vary based on an individual’s collected data. For United, the refined approach led to an increase in year-over-year ancillary revenue of more than 15 percent, he said.
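The "collect, detect, act" pattern described above amounts to scoring a customer's purchase propensity in real time and branching on the score. A toy sketch of that idea; the feature names, weights, and threshold are all invented for illustration, and a real system would use a trained model over United's 150+ variables:

```python
from math import exp

# Invented weights standing in for a trained propensity model.
WEIGHTS = {"prior_upgrades": 0.8, "is_frequent_flyer": 0.5, "trip_length_hours": 0.1}
BIAS = -2.0

def propensity(customer: dict) -> float:
    """Logistic score in [0, 1] from weighted customer variables."""
    z = BIAS + sum(WEIGHTS[k] * customer.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + exp(-z))

def choose_offer(customer: dict) -> str:
    """Act on the detected propensity: tailor the offer shown at check-in."""
    return "cabin upgrade" if propensity(customer) > 0.5 else "standard add-ons"

print(choose_offer({"prior_upgrades": 3, "is_frequent_flyer": 1, "trip_length_hours": 6}))
# cabin upgrade
```

The 200 ms budget quoted in the article is why such scoring is typically a cheap dot product over precomputed features rather than a heavyweight query at request time.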

‘Airlines evolved big data’

Welcome to the big data era in the airline industry, which in many ways was one of big data’s earliest adopters.

“Airlines are awash in data, much of it unstructured,” said Bob Mann, an industry analyst with R.W. Mann & Co. But only recently have airlines been able to use big-data techniques “to solve, among other objectives, how to recognize and enhance customer value, and how to cultivate high-value customers,” he said.

“Airlines have always been very good at collecting data, but they haven’t always been good at using it,” United’s Wilson said. Now that the costs of storing and processing data have dropped—even as airlines collect more and more of it—it’s becoming easier for a company to act on it. At United, roughly a terabyte of customer data is floating around at any given time within its systems. “We don’t keep it all,” Wilson said. “We have to be selective about what we grab.” For the data that is selected, a real-time decision engine does the crunching to turn it into something useful.

It starts at the baggage carousel

One area in which the effects of big data technology are visible is in the handling of customers’ luggage. “We have over a number of years invested millions of dollars in baggage tracking,” said Paul Skrbec, a spokesman with Delta Air Lines. “That was one of those core, behind-the-scenes services for our customers.”

Millions of bags are checked each year with Delta—a total of 130 million are projected for 2014, Skrbec said—and “every customer has had the experience of boarding a plane after checking their bag and wondering if it was there.”

Through hand-held baggage scanners used at passenger check-in, “we’ve had all this tracking data available,” Skrbec said. But “one of the things we realized about two years ago is that customers would benefit from having that information.”

Which is why Delta was the first major airline to launch an application allowing customers to track their bags from their mobile devices, he said. Spanning the iOS, Google Android, BlackBerry and Windows Phone mobile operating systems, the free app has been downloaded more than 11 million times.

In search of new revenue streams

It’s a similar story at Southwest Airlines, which is using big data to determine which new customer services to implement.

“Southwest uses aggregated, anonymous customer data to promote products, services, and featured offers to customers on multiple channels, devices, and websites including Southwest.com,” said Dan Landson, a company spokesman. “By observing and looking into customer behaviors and actions online, we are better suited to offer our travelers the best rates and experiences possible. We also use this data to support the evolving relationships with our customers.”

For example, “we look at the city pairs that are being searched to help us determine what type of service we should have on a specific route,” Landson said.

The payoff? “Our customer and loyalty segments grow year-over-year,” Landson said. “We believe that intelligent, data-based targeting has a lot to do with that growth.”

‘$1 million per week’

The benefits of a data-focused approach may be easy to understand, but execution is another matter entirely. For most airlines, the first problem lies in “bringing together all sorts of disparate silos of passenger information—booking information from transaction systems, web and mobile behavior (including searches, visits, abandoned carts), email data, customer service info, etc.—to create a single, consolidated view of the customer,” said Allyson Pelletier, vice president of marketing with Boxever, which offers a marketing platform focused on putting big data to work for the travel industry.

“Armed with this information, and the resulting insights, they can then take specific action that helps them convert more visitors on-site, secure more revenue, or increase loyalty across any channel,” Pelletier said.

At Norwegian airline Wideroe, for example, a single customer view “enables agents in the call center to understand the full history of the customer—not just the customer service history, but also their recent visits to the website or promotional emails they’ve opened,” she explained.  “After they solve the customer service issue at hand, they’re in a powerful position to then recommend the most appropriate ancillary service—driving add-on revenue—or offer a complimentary upgrade, thereby driving loyalty.”

Insights garnered from a single customer view can also drive personalized messaging into various communications channels, and email is a popular starting place, Pelletier noted.

“One of our largest clients in Europe uses Boxever to understand abandoned carts and then trigger personalized emails to the abandoners,” she said. “They reported back subsequent bookings of $1 million per week from these communications.”

Boxever also cites a 21 percent reduction in customer-acquisition costs on paid media “by understanding who the customer was, where they came from and whether or not they were already a customer,” said Dave O’Flanagan, the company’s chief executive. “This way they could start to move those customers away from expensive acquisition channels to retention channels, like email, which is much cheaper.” There is also potential for a 17 percent uplift in conversion on ancillary cross-sells, such as adding hotel or car to a booking, he added.

‘Few companies are really leveraging big data’

Exciting though those benefits may be, there’s an even bigger pool of potential payoffs remaining untouched. “Surprisingly few [airline] companies are really leveraging big data today,” O’Flanagan said.

Indeed, “I’ve not seen a single major airline with an integrated ‘big data’ business solution, nor an airline with a plan to integrate such a program,” said Richard Eastman, founder and president of The Eastman Group, which builds travel software.

That depends on how one defines big data, however. “The airlines will tell you they ‘have it all’ without really knowing or understanding what ‘big data’ really is,” Eastman said. “Airline managements remain so focused on selling seats with their existing inventory systems that they have ignored buyer information needs as well as the tools that would enable them to reach out to buyers and travelers to serve those needs—let alone, reach buyers at decision-making moments.”

Marketing, flight operations and crew operations are all areas of rich opportunity, O’Flanagan said.

“I think there’s still a huge unmet need in the marketing and customer experience area,” he said. “Companies like Google are trying to be the ultimate assistant with technologies like Google Now. I think there’s a huge opportunity for airlines to create a helpful travel assistant that knows what I need before I do by combining data with mobile—helping people through airports, in-destination, right throughout the whole travel journey.

“Imagine a travel application that knows where I am, that I’m traveling with my family and that the weather is bad on our beach holiday. It could start to offer alternative itineraries close by that are family-friendly and not weather-dependent. These are truly valuable things an airline could do for me if they could use big data effectively and join the dots between me, my travel experience and environmental factors affecting that.”

Originally posted via “For the airline industry, big data is cleared for take-off”

Source: For the airline industry, big data is cleared for take-off by analyticsweekpick