Feb 08, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Statistically Significant  Source

[ AnalyticsWeek BYTES]

>> The Big List: 80 Of The Hottest SEO, Social Media & Digital Analytics Tools For Marketers by analyticsweekpick

>> Big Data – What it Really Means for VoC and Customer Experience Professionals by bobehayes

>> Your Relative Performance: A Better Predictor of Employee Turnover by bobehayes

Wanna write? Click Here

[ NEWS BYTES]

>> An Inconvenient Truth: 93% of Customer Experience Initiatives are Failing… – Customer Think, under Customer Experience

>> Logility acquires Halo Business Intelligence to expand advanced … – Logistics Management, under Prescriptive Analytics

>> Apache Hadoop 3.0 goes GA, adds hooks for cloud and GPUs – TechTarget, under Hadoop

More NEWS? Click Here

[ FEATURED COURSE]

Pattern Discovery in Data Mining

image

Learn the general concepts of data mining along with basic methodologies and applications. Then dive into one subfield in data mining: pattern discovery. Learn in-depth concepts, methods, and applications of pattern disc… more

[ FEATURED READ]

The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t

image

People love statistics. Statistics, however, do not always love them back. The Signal and the Noise, Nate Silver’s brilliant and elegant tour of the modern science-slash-art of forecasting, shows what happens when Big Da… more

[ TIPS & TRICKS OF THE WEEK]

Fix the culture: spread awareness to get adoption
Adoption of analytics tools and capabilities has not yet caught up to industry standards, and talent has long been the bottleneck to broader enterprise adoption. One of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way toward quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q: What is better: good data or good models? And how do you define 'good'? Is there a universal good model? Are there any models that are definitely not so good?
A: * Good data is definitely more important than good models
* If data quality weren't important, organizations wouldn't spend so much time cleaning and preprocessing it!
* Even for scientific purposes, good data (reflected in the design of experiments) is very important

How do you define good?
– Good data: data relevant to the project/task at hand
– Good model: a model relevant to the project/task
– Good model: a model that generalizes to external data sets

Is there a universal good model?
– No; otherwise there wouldn't be an overfitting problem!
– An algorithm can be general-purpose, but not the model it produces
– A model built on a specific data set in one part of an organization can be ineffective on other data sets from the same organization
– Models have to be updated on a somewhat regular basis

Are there any models that are definitely not so good?
– 'All models are wrong, but some are useful' – George E. P. Box
– It depends on what you want: predictive accuracy or explanatory power
– If a model delivers neither, it is a bad model
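The generalization point can be illustrated with a small synthetic sketch (all data here is made up): a model flexible enough to fit its training data perfectly can still be a bad model on data it has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training points and ten held-out points from the same process
f = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0, 1, 10)
x_test = np.linspace(0.05, 0.95, 10)
y_train = f(x_train) + rng.normal(0, 0.1, 10)
y_test = f(x_test) + rng.normal(0, 0.1, 10)

# A degree-9 polynomial can pass through all 10 training points...
coeffs = np.polyfit(x_train, y_train, 9)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# ...so training error is near zero, while error on unseen data is far larger
print(train_err, test_err)
```

The "good" model here is not the one with the lowest training error, but the one that would hold up on the held-out points.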

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek: Big Data at Work: Paul Sonderegger


Subscribe to YouTube

[ QUOTE OF THE WEEK]

It is a capital mistake to theorize before one has data. Insensibly, one begins to twist the facts to suit theories, instead of theories to suit facts. – Arthur Conan Doyle

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Nathaniel Lin (@analytics123), @NFPA


Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

According to estimates, the volume of business data worldwide, across all companies, doubles every 1.2 years.

Sourced from: Analytics.CLUB #WEB Newsletter

Big data analytics are the future of cyber security

LAS VEGAS: Developments in hacking culture and enterprise technology mean that big data-led intelligence defences are the future of the security industry, according to EMC.

David Goulden, CEO of information infrastructure at EMC, made the claim during a press session at EMC World attended by V3.

“The security industry is changing dramatically. If you look to the past, firewall and antivirus technologies came up as the main solution in the second platform,” he said.

“But at this stage in IT the apps were generally used within the enterprise so it was manageable. But this is no longer the case in the third platform.”

Goulden explained that in today’s threat landscape there is no way that companies can keep determined adversaries out of their systems.

He added that to overcome the challenge businesses will have to adopt big data analytics solutions that identify and combat malicious activity on the network.

“The two big challenges in security are big data challenges and based on intelligence. The first is how to securely authenticate who is coming into the network,” he said.

“The second big challenge in security is the identification of anomalies occurring in your network in real time.”

EMC is one of many firms to cite analytics as the future of cyber security.

UK firm Darktrace said during an interview with V3 that businesses’ reliance on perimeter-based defences is a key reason why hackers can spend months at a time in companies’ systems undetected.

A lack of employee awareness regarding security best practice is another common problem at many businesses. Verizon said in March that poor security awareness results in the success of one in four phishing scams.

Taking a swipe at competitors, Goulden boasted that the sweep of data breaches resulting from businesses' security practices has led to a rise in the number of companies migrating to RSA analytics solutions.

“We’re a generation ahead of people at this [big data analytics-based security]. Every public breach that has occurred in recent memory has been at companies using our competitors and many have since moved to use RSA technologies,” he said.

He added that the firm has capitalised on this trend, claiming: “75 percent of RSA’s work is aimed at improving security in the future.”

Despite Goulden’s bold claims, RSA reported just $4m year-on-year revenue growth in its Q1 2015 financials.

RSA took in a total $248m in revenue, marking an overall $165m gross profit during the first three months of 2015.

Goulden’s comments follow the launch of EMC’s Data Domain DD9500 solution and updates for its ProtectPoint and CloudBoost services.

Originally posted via “Big data analytics are the future of cyber security”

Originally Posted at: Big data analytics are the future of cyber security by analyticsweekpick

March 6, 2017 Health and Biotech analytics news roundup

Here’s the latest in health and biotech analytics:

Mathematical Analysis Reveals Prognostic Signature for Prostate Cancer: University of East Anglia researchers used an unsupervised technique to categorize cancers based on gene expression levels. Their method was better able than current supervised methods to identify patients with more harmful variants of the disease.

Assisting Pathologists in Detecting Cancer with Deep Learning: Scientists at Google have trained deep learning models to detect tumors in images of tissue samples. These models beat pathologists’ diagnoses by one metric.

Patient expectations for health data sharing exceed reality, study says: The Humana study shows that, among other beliefs, most patients think doctors share more information than they actually do. They also expect information from digital devices will be beneficial.

NHS accused of covering up huge data loss that put thousands at risk: The UK’s national health service failed to deliver half a million medically relevant documents between 2011 and 2016. Officials had previously briefed Parliament about the failure, but not about its scale.

Entire operating system written into DNA at 215 Pbytes/gram: Yaniv Erlich and Dina Zielinski (New York Genome Center) used a “fountain code” to translate a 2.1 MB archive into DNA. They were able to retrieve the data by sequencing the resulting fragments, a process that was robust to mutations and loss of sequences.

Source by pstein

Feb 01, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Extrapolating  Source

[ AnalyticsWeek BYTES]

>> Focus on success, not perfection: Look at this data science algorithm for inspiration by analyticsweekpick

>> Which Machine Learning to use? A #cheatsheet by v1shal

>> #FutureOfData with @CharlieDataMine, @Oracle discussing running analytics in an enterprise by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Relx Group’s trading update is reassuringly relaxed – The Times, under Business Analytics

>> Global Big Data Security Market 2017 – IBM, Microsoft, Oracle, Amazon Web Services – Market.Biz, under Big Data Security

>> Solving the ‘Last Mile’ Problem in Data Science – Datanami, under Data Scientist

More NEWS? Click Here

[ FEATURED COURSE]

Tackle Real Data Challenges

image

Learn scalable data management, evaluate big data technologies, and design effective visualizations…. more

[ FEATURED READ]

Data Science from Scratch: First Principles with Python

image

Data science libraries, frameworks, modules, and toolkits are great for doing data science, but they’re also a good way to dive into the discipline without actually understanding data science. In this book, you’ll learn … more

[ TIPS & TRICKS OF THE WEEK]

Data aids judgement, it does not replace it
Data is a tool: a means to help build consensus and facilitate human decision-making, not replace it. Analysis converts data into information; information, via context, leads to insight. Insight leads to decisions, and decisions lead to outcomes that bring value. So data is just the start; context and intuition also play a role.

[ DATA SCIENCE Q&A]

Q: What are feature vectors?
A: * An n-dimensional vector of numerical features that represents some object
* Examples: term occurrence frequencies, the pixels of an image, etc.
* Feature space: the vector space associated with these vectors
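A minimal sketch of the idea, using term counts over a toy vocabulary (the vocabulary and document below are made up for illustration):

```python
from collections import Counter

# Vocabulary fixed in advance: each position in the vector is one feature
vocab = ["data", "model", "learning", "pizza"]

def feature_vector(text: str) -> list[int]:
    """Map a document to an n-dimensional vector of term counts."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocab]

doc = "Data beats model tuning: good data and more data"
print(feature_vector(doc))  # → [3, 1, 0, 0]
```

Every document now lives as a point in the same 4-dimensional feature space, which is what lets distance- and dot-product-based algorithms compare them.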

Source

[ VIDEO OF THE WEEK]

Discussing #InfoSec with @travturn, @hrbrmstr(@rapid7) @thebearconomist(@boozallen) @yaxa_io


Subscribe to YouTube

[ QUOTE OF THE WEEK]

You can use all the quantitative data you can get, but you still have to distrust it and use your own intelligence and judgment. – Alvin Toffler

[ PODCAST OF THE WEEK]

@SidProbstein / @AIFoundry on Leading #DataDriven Technology Transformation #FutureOfData #Podcast


Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

39 percent of marketers say that their data is collected ‘too infrequently or not real-time enough.’

Sourced from: Analytics.CLUB #WEB Newsletter

Three Upcoming Talks on Big Data and Customer Experience Management

I have recently written on Big Data’s role in Customer Experience Management (CEM) and how companies can extract great insight from their business data when different types of business data are integrated with customer feedback data. I have been invited to share my thoughts on Big Data and Customer Experience Management at three upcoming conferences in May and June (see conference details below).

Big Data and CEM

Big Data refers to the idea that companies can extract value from collecting, processing, and analyzing vast quantities of data. Businesses that get a better handle on these data will be more likely to outperform competitors that do not. In my talks, I will explore how businesses can apply Big Data principles to their existing business data, including customer feedback, to gain deeper customer insight. Incorporating customer feedback metrics into a Big Data strategy will help companies:

  1. answer bigger questions about their customers
  2. spread a customer-centric culture in the company through increased collaboration among different departments
  3. use objective measures of customer loyalty to build better predictive models

Upcoming Big Data Talks

These talks will be my first public appearances as the Chief Customer Officer of TCELab. Here are the presentation titles and corresponding conference information.

  • Using your Big Data to Improve Customer Loyalty (May 15): At VOCFusion - May 14-17, Las Vegas, NV.
  • Big Data has Big Implications for Customer Experience Management (May 18): At Score Conference - May 16-18, Boston, MA.
  • Integrating Customer Experience Metrics Into Your Big Data Plan (June 25): At Redes Sociales y sCRM – June 25-26, Bogota, Colombia.

I am looking forward to participating in each of these conferences. If you are attending any of these events, I hope to see you there!

Source: Three Upcoming Talks on Big Data and Customer Experience Management

Jan 25, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Data Storage  Source

[ AnalyticsWeek BYTES]

>> 100 Greatest Data Quotes by v1shal

>> Is this video software company set to take hockey analytics to the next level? by analyticsweekpick

>> Hacking the Data Science by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Robert J. Rengel, 89, St. Cloud – WJON News, under Cloud

>> Big data best practice means a best-of-breed approach – ComputerWeekly.com (blog), under Big Data

>> CARICOM Statistics Day Observation Today – Bernews, under Statistics

More NEWS? Click Here

[ FEATURED COURSE]

The Analytics Edge

image

This is an Archived Course
EdX keeps courses open for enrollment after they end to allow learners to explore content and continue learning. All features and materials may not be available, and course content will not be… more

[ FEATURED READ]

The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t

image

People love statistics. Statistics, however, do not always love them back. The Signal and the Noise, Nate Silver’s brilliant and elegant tour of the modern science-slash-art of forecasting, shows what happens when Big Da… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from a zombie apocalypse of unscalable models
One living, breathing zombie in today’s analytical models is the absence of error bars. Not every model is scalable or holds up as data grows. The error bars attached to almost every model should be duly calibrated: as business models rake in more data, error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failures we never want to see.
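One simple way to attach calibrated error bars to a model metric is the bootstrap: resample the observed values with replacement and read an interval off the resampled statistic. A minimal sketch on synthetic numbers (the metric and sample size are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend this is a model metric observed on 200 samples
scores = rng.normal(loc=0.75, scale=0.1, size=200)

# Bootstrap: resample with replacement, recompute the mean each time
boot_means = [rng.choice(scores, size=scores.size, replace=True).mean()
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])

print(f"metric = {scores.mean():.3f}, 95% interval = [{lo:.3f}, {hi:.3f}]")
```

The interval widens or narrows honestly with the data, which is exactly the "keep it sensible and in check" behavior the tip calls for.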

[ DATA SCIENCE Q&A]

Q: What are confounding variables?
A: * An extraneous variable in a statistical model that correlates, directly or inversely, with both the dependent and the independent variable
* A spurious relationship is a perceived relationship between an independent variable and a dependent variable that has been estimated incorrectly
* The estimate fails to account for the confounding factor
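A quick simulation makes the idea concrete: here z confounds x and y, which never influence each other, yet their raw correlation looks strong until z is regressed out (all numbers are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# z is the confounder: it drives both x and y, which never influence each other
z = rng.normal(size=n)
x = 2 * z + rng.normal(size=n)
y = -3 * z + rng.normal(size=n)

# Raw correlation looks like a strong (spurious) x-y relationship
raw = np.corrcoef(x, y)[0, 1]

# Regress z out of both variables; the residual correlation collapses
x_res = x - np.polyval(np.polyfit(z, x, 1), z)
y_res = y - np.polyval(np.polyfit(z, y, 1), z)
adjusted = np.corrcoef(x_res, y_res)[0, 1]

print(raw, adjusted)
```

The raw correlation is near -0.85 purely because of z; once z is accounted for, the adjusted correlation is near zero, which is the spurious relationship disappearing.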

Source

[ VIDEO OF THE WEEK]

Using Topological Data Analysis on your BigData


Subscribe to YouTube

[ QUOTE OF THE WEEK]

It is a capital mistake to theorize before one has data. Insensibly, one begins to twist the facts to suit theories, instead of theories to suit facts. – Arthur Conan Doyle

[ PODCAST OF THE WEEK]

@BrianHaugli @The_Hanover on Building a #Leadership #Security #Mindset #FutureOfData #Podcast


Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues.

Sourced from: Analytics.CLUB #WEB Newsletter

Gleanster – Actionable Insights at a Glance

I am happy to announce that I have joined Gleanster’s Thought Leader group as a contributing analyst. Gleanster is a market research and advisory services firm that benchmarks best practices in technology-enabled business initiatives, delivering actionable insights that allow companies to make smart business decisions and match their needs with vendor solutions.

In my role at Gleanster, I will be involved in providing insight into the Enterprise Feedback Management (EFM) and Customer Experience Management (CEM) space. Building on Gleanster’s 2010 Customer Feedback Management report as well as my own research on best practices in customer feedback programs (See Beyond the Ultimate Question for complete results of my research), I will be directing Gleanster’s upcoming benchmark study on Customer Experience Management. In this study, we will identify specific components of CEM that are essential in helping companies deliver a great customer experience that increases customer loyalty.

“We are excited to have Dr. Hayes as part of our distinguished thought leader group. Dr. Hayes brings over 20 years of experience to bear on important issues in customer experience management and enterprise feedback management. Specifically, his prior research on the measurement and meaning of customer loyalty and best practices in customer feedback programs has helped advance the field tremendously. His scientific research is highly regarded by his industry peers, and we are confident that Dr. Hayes’ continuing contributions to the field will bring great value to the Gleanster community.”

Jeff Zabin, CEO
Gleanster

As a proud member of the 1% for the Planet alliance, Gleanster is committed to donating at least 1% of their annual sales revenue to nonprofit organizations focused on environmental sustainability.

Originally Posted at: Gleanster – Actionable Insights at a Glance by bobehayes

What’s a CFO’s biggest fear, and how can machine learning help?

cfo_scared

It’s 5:05pm EST. Bob, CFO of ABC Inc., is about to get on an earnings call after just reporting a 20% miss on earnings due to slower revenue growth than forecast. ABC’s stock price is plummeting, down 25% in extended-hours trading. The board is furious and investors demand answers on the discrepancy.

Inaccurate revenue forecasting remains one of the biggest risks for CFOs. In a recent study, more than 50% of companies said their pipeline forecast is only about 50% accurate. Projecting a $30M revenue target and coming in $6M short can leave investors and employees frustrated and feeling misled about the company’s growth trajectory.

In the past 10 years, supply chains have become much more complex, with omni-channel distribution and a growing number of indirect participants that can influence product demand. Advertising and promotions can create demand uplifts that spike sales by 20% or more. In addition, different types of customers have different purchasing behaviors, driven by a myriad of underlying indicators, and should be modeled individually. Yet Financial Planning and Analysis (FP&A) has not fundamentally changed despite the changing landscape in the way companies do business. The process is still largely manual and dependent on time-series estimation techniques dating back to the 1980s.

Machine learning uses algorithms that learn from data to guide us toward more informed decisions. Leveraging the power of machines lets us consider more scenarios and combine the effects of thousands of indicators to improve forecast accuracy. For revenue forecasting, machine learning excels in the following three areas:

1. Trend discovery from unlimited amounts of data

With advances in big data technologies, computers can cost-effectively crunch through data of all types and sizes. Unlike humans, algorithms can simulate numerous scenarios and recognize patterns that keep re-emerging in the data. Nor are they limited to structured data: they can examine unstructured data such as emails and logs to extract meaningful indicators.

2. Granularity of forecast

Instead of looking at product line level aggregate sales values, machine learning algorithms can detect patterns at SKU, purchase order and invoice levels to discover interesting relationships and dependencies. For example, algorithms may find that the demand of one product (iPhone 6) is a leading indicator of demand for another product (iPhone 6 accessories).

3. Adaptive and Dynamic

Machines can also automatically adapt and re-run forecasting scenarios to adjust to changing market conditions and consumer demands.
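The leading-indicator idea from point 2 can be sketched as a lag scan: generate an "accessory" demand series that follows a "phone" series by two periods, then recover that lag from the data alone (the series, names, and lag are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic weekly demand: accessory demand echoes phone demand two weeks later
phone = rng.normal(size=300)
accessory = np.empty(300)
accessory[:2] = rng.normal(size=2)
accessory[2:] = phone[:-2] + 0.3 * rng.normal(size=298)

def best_lag(leader, follower, max_lag=8):
    """Return the lag k that maximizes corr(leader[t], follower[t+k])."""
    corrs = {k: np.corrcoef(leader[:-k], follower[k:])[0, 1]
             for k in range(1, max_lag + 1)}
    return max(corrs, key=corrs.get)

print(best_lag(phone, accessory))  # → 2
```

In a real FP&A setting the same scan would run at SKU or invoice granularity, surfacing which products lead which, so the forecast for one can borrow signal from the other.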

Companies such as Flowcast are leading the charge in introducing machine learning techniques to the finance department of organizations. This will arm CFOs with much greater confidence in their analyses and projections.

 

Source: What’s a CFO’s biggest fear, and how can machine learning help?

Jan 18, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Convincing  Source

[ AnalyticsWeek BYTES]

>> Looking for a career? ‘Big data’ analysts in high demand by analyticsweekpick

>> Why Using the ‘Cloud’ Can Undermine Data Protections by analyticsweekpick

>> When shouldn’t you rely on data analytics – The danger of trusting polls by checcaaird

Wanna write? Click Here

[ NEWS BYTES]

>> Lecturer / Senior Lecturer in Statistics – The Conversation AU, under Statistics

>> Artificial Intelligence Use by Healthcare Providers Lags. But Not for Long. – HealthLeaders Media, under Prescriptive Analytics

>> Scoreboard/Statistics/Odds – Akron Beacon Journal, under Statistics

More NEWS? Click Here

[ FEATURED COURSE]

Hadoop Starter Kit

image

Hadoop learning made easy and fun. Learn HDFS and MapReduce, plus an introduction to Pig and Hive, with FREE cluster access…. more

[ FEATURED READ]

Hypothesis Testing: A Visual Introduction To Statistical Significance

image

Statistical significance is a way of determining whether an outcome occurred by random chance, or whether something caused that outcome to differ from the expected baseline. Statistical significance calculations find their … more
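As a toy illustration of the blurb’s idea: if a coin lands heads 60 times in 100 flips, an exact binomial tail probability tells us how likely a result at least that extreme is under a fair coin (the 60-of-100 numbers are just an example):

```python
from math import comb

n, heads = 100, 60

# P(X >= 60) for X ~ Binomial(100, 0.5): the chance of a result at least
# this extreme if the coin is actually fair
p_value = sum(comb(n, k) for k in range(heads, n + 1)) / 2**n

print(round(p_value, 4))  # one-sided p ≈ 0.028, below the usual 0.05 cutoff
```

Since the tail probability falls below 0.05, the 60-heads outcome would usually be called statistically significant rather than chalked up to chance.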

[ TIPS & TRICKS OF THE WEEK]

Finding success in data science? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. Since most data science professionals work in isolation, getting an unbiased perspective is not easy. Often it is also not easy to see how a data science career will progress. A network of mentors addresses these issues: it gives data professionals an outside perspective and an unbiased ally. It is extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q: Do we always need the intercept term in a regression model?
A: * It guarantees that the residuals have a zero mean
* It guarantees that the least-squares slope estimates are unbiased
* The regression line floats up and down, via the constant, to the point where the mean of the residuals is zero
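A short numerical check of the first point, on synthetic data: fit ordinary least squares with and without an intercept column and compare the residual means.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 5 + 2 * x + rng.normal(0, 1, 200)   # true intercept is 5

# With an intercept column, least squares forces residuals to average to zero
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
res_with = y - X @ beta

# Without it, the line is pinned through the origin and residuals are biased
slope = np.linalg.lstsq(x[:, None], y, rcond=None)[0]
res_without = y - x * slope

print(res_with.mean(), res_without.mean())
```

The first mean is zero up to floating-point noise; the second is visibly nonzero, because dropping the constant leaves the line nowhere to "float" to absorb the offset.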

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Juan Gorricho, @disney


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Torture the data, and it will confess to anything. – Ronald Coase

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with  John Young, @Epsilonmktg


Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

The Hadoop (open-source software for distributed computing) market is forecast to grow at a compound annual growth rate of 58%, surpassing $1 billion by 2020.

Sourced from: Analytics.CLUB #WEB Newsletter