A beginner's guide to data analytics

This is Part I of our three-part June 2015 print cover story on healthcare analytics. Part I focuses on the first steps of launching an analytics program. Part II focuses on intermediate strategies, and Part III focuses on the advanced stages of analytics use.

This first part may sting a bit: To those healthcare organizations in the beginning stages of rolling out a data analytics program, chances are you’re going to do it completely and utterly wrong.

At least that’s according to Eugene Kolker, chief data scientist at Seattle Children’s Hospital, who has worked in data analysis for the past 25 years. Speaking about the initial metrics work, he tells Healthcare IT News: “The majority of places, whether they’re small or large, they’re going to do it wrong.” And when you’re dealing with people’s lives, that’s hardly something to take lightly.

Kolker would much prefer that not to be the case, but from his experiences and what he’s seen transpire in the analytics arena across other industries, there are some unfortunate implications for healthcare beginners.

But it doesn’t have to be this way. Careful, methodical planning can position an organization for success, he said. But there are more than a few things you have to pay serious attention to.

First, you need to get executive buy-in. Data analytics can help the organization improve performance in myriad arenas. It can save money in the changing value-based reimbursement world. Better yet, it can save lives. And, if you’re trying to meet strategic objectives, it may be a significant part of the equation there too.

As Kolker pointed out in a presentation given at the April 2015 CDO Summit in San Jose, California, data and analytics should be considered a “core service,” similar to that of finance, HR and IT.

Once you get your executive buy-in, it’s on to the most important part of it all: the people. If you don’t have people who have empathy, if you don’t have a team that communicates and manages well, you can count on a failed program, said Kolker, who explained that this process took him years to finally get right. People. Process. Technology – in that order of importance.

“Usually data scientists are data scientists not because they like to work with people but because they like to work with data and computers, so it’s a very different mindset,” he said. It’s important, however, “to have those kind of people who can be compassionate,” who can do analysis without bias.

And why is that? “What’s the worst that can happen if Amazon screws up (with analytics)?” Kolker asked. “It’s not life and death like in healthcare,” where “it’s about very complex issues about very complex people. … The pressure for innovation is much much higher.”

[Part II: Clinical & business intelligence: the right stuff]

[Part III: Advanced analytics: All systems go]

When in the beginning stages of anything analytics, the aim is to start slow but not necessarily to start easy, wrote Steven Escaravage and Joachim Roski, principals at Booz Allen, in a 2014 Health Affairs piece on data analytics. Both have worked on 30 big data projects with various federal health agencies and put forth their lessons learned for those ready to take a similar path.

One of those lessons?

Make sure you get the right data that addresses the strategic healthcare problem you’re trying to measure or compare, not just the data that’s easiest to obtain.

“While this can speed up a project, the analytic results are likely to have only limited value,” they explained. “We have found that when organizations develop a ‘weighted data wish list’ and allocate their resources toward acquiring high-impact data sources as well as easy-to-acquire sources, they discover greater returns on their big data investment.”

So this may lead one to ask: What exactly is the right data? What metrics do you want? Don’t expect a clear-cut answer here; it varies by organization.

First, “you need to know the strategic goals for your business,” added Kolker. “Then you start working on them, how are you going to get data from your systems, how are you going to compare yourself outside?”

In his presentation at the CDO Summit this April, Kolker described one of Seattle Children’s data analytics projects that sought to evaluate the effectiveness of a vendor tool that predicted severe clinical deterioration, or SCD, of a child’s health versus the performance of a home-grown internal tool that had been used by the hospital since 2009.

After looking at cost, performance, development and maintenance, utility, EHR integration and algorithms, Kolker and his team found, on the buy-versus-build question, that the external vendor tool was not usable for predicting SCD, though it could be tested for something else. Furthermore, the home-grown tool needed to be integrated into the EHR.

Kolker and his team have also helped develop a metric to identify medically complex patients after the hospital’s chief medical officer came to them wanting to boost outcomes for these patients. Medically complex patients typically have high readmissions and consume considerable hospital resources, and SCH wanted to improve outcomes for this group without increasing costs for the hospital.

For folks at the Nebraska Methodist Health System, utilizing a population risk management application with a variety of metrics built in was a big help, explained Linda Burt, chief financial officer of the health system, in a webinar hosted by a Healthcare IT News sister publication this past April.


“The common ones you often hear of such as admissions per 1,000, ED visits per 1,000, high-risk high-end imaging per 1,000,” she said. Using the application, the health system was able to identify that a specific cancer presented the biggest opportunity for cost alignment.

And health system CFO Katrina Belt’s advice? “We like to put a toe in the water and not do a cannon ball off the high dive,” she said. Belt, the CFO at Baptist Health in Montgomery, Alabama, said a predictive analytics tool is sifting through various clinical and financial data to identify opportunities for improvement.

In a Healthcare Finance webinar this April, Belt said Baptist Health started by looking at its self-pay population and discovered that although its ER visits were declining, intensive care visits by patients with acute care conditions were on an upward trend.

Belt recommended starting with claims data.

“We found that with our particular analytics company, we could give them so much billing data that was complete and so much that we could glean from just the 835 and 837 files that it was a great place for us to start,” she said. Do something you can get from your billing data, Belt continued, and once you learn “to slice and dice it,” share it with your physicians. “Once they see the power in it,” she said, “that’s when we started bringing in the clinical data,” such as tackling CAUTIs.

But some argue an organization shouldn’t start with an analytics platform. Rather, as Booz Allen’s Escaravage and Roski wrote, start with the problem; then go to a data scientist for help with it.

One federal health agency they worked with on an analytics project, for instance, failed to allow the data experts “free rein” to identify new patterns and insight, and instead provided generic BI reports to end users. Ultimately, the results were disappointing.

“We strongly encouraged the agency to make sure subject matter experts could have direct access to the data to develop their own queries and analytics,” Escaravage and Roski continued. Overall, when in the beginning phases of any analytics project, one thing to keep in mind, as Kolker reinforced: “Don’t do it yourself.” If you do, “you’re going to fail,” he said. Instead, “do your homework; talk to people who did it.”

To read the complete article on Healthcare IT News, click here.


3 Vendors Lead the Wave for Big Data Predictive Analytics

Enterprises have lots of solid choices for big data predictive analytics.

That’s the key takeaway from Forrester’s just-released Wave for Big Data Predictive Analytics Solutions for the second quarter of 2015.

That being said, the products Forrester analysts Mike Gualtieri and Rowan Curran evaluated are quite different.

Data scientists are more likely to appreciate some, while business analysts will like others. Some were built for the cloud, others weren’t.

All of them can be used to prepare data sets, develop models using both statistical and machine learning algorithms, and deploy and manage predictive analytics lifecycles, and all provide tools for data scientists, business analysts and application developers.

General Purpose

It’s important to note that there are plenty of strong predictive analytics solution providers that weren’t included in this Wave, and it’s not because their offerings aren’t any good.

Instead Forrester focused specifically on “general purpose” solutions rather than those geared toward more specific purposes like customer analytics, cross-selling, smarter logistics, e-commerce and so on. BloomReach, Qubit, Certona, Apigee and FusionOps, among others, are examples of vendors in the aforementioned categories.

The authors also noted that the open source software community is driving predictive analytics into the mainstream. Developers have an abundant selection of APIs within reach that they can leverage via popular programming languages like Java, Python and Scala to prepare data and predictive models.

Not only that but, according to the report, many BI platforms also offer “some predictive analytics capabilities.” Information Builders, MicroStrategy and Tibco, for example, integrate with R easily.

The “open source nature” of BI solutions like Birt, OpenText and Tibco Jaspersoft make R integration simpler.

Fractal Analytics, Opera Solutions, Teradata’s Think Big and Beyond the Arc and the like also provide worthwhile solutions and were singled out as alternatives to buying software. The authors also noted that larger consulting companies like Accenture, Deloitte, Infosys and Virtusa all have predictive analytics and/or big data practices.

In total, Forrester looked at 13 vendors: Alpine Data Labs, Alteryx, Angoss, Dell, FICO, IBM, KNIME, Microsoft, Oracle, Predixion Software, RapidMiner, SAP and SAS.

Forrester’s selection criteria in the most general sense rate solution providers according to their Current Offering (components include architecture, security, data, analysis, model management, usability and tooling, and business applications) and Strategy (components include acquisition and pricing, ability to execute, implementation support, solution road map, and go-to-market growth rate). Each main category carries a 50 percent weight.

Leading the Wave

IBM, SAS and SAP – three tried and trusted providers – lead this Forrester Wave.

IBM achieved perfect scores in seven of the twelve criteria: Data, Usability and Tooling, Model Management, Ability to Execute, Implementation Support, Solution Road Map and Go-to-Market Growth Rate. “With customers deriving insights from data sets with scores of thousands of features, IBM’s predictive analytics has the power to take on truly big data and emerge with critical insights,” note the report’s authors. Where does IBM fall short? Mostly in the Acquisition and Pricing category.

SAS is the granddaddy of predictive analytics and, like IBM, it achieved a perfect score many times over. It’s interesting to note that it scored highest among all vendors in Analysis. It was weighed down, however, by its strategy in areas like Go-to-Market Growth Rate and Acquisition and Pricing. This may not be as big a problem by next year, at least if Gartner was right in its most recent MQ on BI and Analytics Platforms, where it noted that SAS was aware of the drawback and was addressing the issue.

“SAP’s relentless investment in analytics pays off,” Forrester notes in its report. And as we’ve reiterated many times, the vendor’s predictive offerings include some snazzy differentiating features like analytics tools that you don’t have to be a data scientist to use, a visual tool that lets users analyze several databases at once, and for SAP Hana customers SAP’s Predictive Analytics Library (PAL) to analyze big data.

The Strong Performers

Not only does RapidMiner’s predictive analytics platform include more than 1,500 methods across all stages of the predictive analytics life cycle, but with a single click they can also be integrated into the cloud. There’s also a nifty “wisdom of the crowds” feature that Forrester singles out; it helps users sidestep mistakes others have made in the past and get to insights quicker. What’s the downside? Implementation support and security.

Alteryx takes the pain out of data prep, which is often the hardest and most miserable part of a data worker’s job. It also offers a visual tool that helps data scientists collaborate with business users. Add to that an analytical apps gallery to help users share their data prep and modeling workflows with other users, and you’ve set a company up with what it needs to bring forth actionable insights. While Alteryx shines in areas like Data, Ability to Execute, and Go-to-Market Growth Rate, there’s room for improvement in Architecture and Security.

Oracle rates as a strong performer, even though it doesn’t offer a single-purpose solution. Instead, its Oracle SQL Developer tool includes a visual interface that allows data analysts to create analytical workflows and models, according to Forrester. Not only that, but Oracle also takes advantage of open-source R for analysis, and has revised a number of its algorithms to take advantage of Oracle’s database architecture and Hadoop.

FICO, yes, Forrester’s talking about the credit scoring people, has taken its years of experience in actionable predictive analytics, built a solution and taken it to the cloud, where its use is frictionless and available to others. It could be a gem for data scientists who are continuously building and deploying models. FICO’s market offering has lots of room for improvement in areas like Data and Business Applications, though.

Angoss aims to make it easier for non-data scientists to get busy with predictive analytics tools via support services and intuitive interfaces for developing predictive models. While the solution provider focused its go-to-market offerings on decision trees until recently, it now also offers a Strategy Tree capability, which helps advanced users create complex cohorts from trees.

Alpine Data Labs offers “the most comprehensive collaboration tools of all the vendors in the Forrester Wave, and still manages to make the interface simple and familiar to users of any mainstream social media site,” wrote Gualtieri and Curran in the report. The problem seems to be that not enough people buy Alpine products. It might be a matter of acquisition and pricing options; it’s here that Alpine scores lowest among all vendors in the Wave.

Dell plans to go big in the big data and predictive analytics game. It bought its way into the market when it acquired Statistica. “Statistica has a comprehensive library of analysis algorithms and modeling tools and a significant installed base,” say the authors. Dell scores second lowest among Wave vendors in architecture, so it has a lot of room for improvement there.

KNIME is the open source contender in Forrester’s Wave. And though “free” isn’t the selling point of open source, it counts, perhaps second only to the passion of its developers. “KNIME’s flexible platform is supported by a community of thousands of developers who drive the continued evolution of the platform by contributing extensions essential to the marketplace, such as prebuilt industry APIs, geospatial mapping, and decision tree ensembles,” write the researchers. KNIME competes with only Microsoft for a low score on business applications and is in last place, by itself, when it comes to architecture. It has a perfect score when it comes to data.

Make Room for Contenders

Both Microsoft and Predixion Software bring something to the market that others do not.

They seem to be buds waiting to blossom. Microsoft, for its part, has its new Azure Machine Learning offering as well as the assets of Revolution Analytics, which it recently acquired. Not only that, but the company’s market reach and deep pockets cannot be overstated. While Microsoft brought home lower scores than many of the vendors evaluated in this Wave, it’s somewhat forgivable because its big data predictive analytics solution may be the youngest.

Predixion Software, according to Forrester, offers a unique tool: the machine learning semantic model (MSLM), which packages up transformations, analysis, and scoring of data and can be deployed in .NET or Java OSGI containers. “This means that users can embed entire predictive workflows in applications,” says the report.

Plenty of Good Choices

The key takeaways from Forrester’s research indicate that more classes of users can now have access to “modern predictive power” and that predictive analytics now allow organizations to embed intelligence and insight.

The analysts, of course, suggest that you download their report, which, in fact, might be worthwhile doing. This is a rapidly evolving market and vendors are upgrading their products at a rapid clip. We know this because there’s rarely a week where a new product announcement or feature does not cross our desks.

And if it’s true that the organizations who best leverage data will win the future, then working with the right tools might be an important differentiator.

Originally posted via “3 Vendors Lead the Wave for Big Data Predictive Analytics”



Jul 13, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)




Using Emojis to Boost Sentiment Analysis (Datanami) – under Sentiment Analysis

Software-defined secure networking is ideal for hybrid cloud security (CyberScoop) – under Hybrid Cloud

Why Big Data Wasn’t Trump’s Achilles Heel After All (Forbes) – under Big Data Analytics



Introduction to Apache Spark


Learn the fundamentals and architecture of Apache Spark, the leading cluster-computing framework among professionals…. more


Introduction to Graph Theory (Dover Books on Mathematics)


A stimulating excursion into pure mathematics aimed at “the mathematically traumatized,” but great fun for mathematical hobbyists and serious mathematicians as well. Requiring only high school algebra as mathematical bac… more


Strong business case could save your project
Like anything in corporate culture, a project is oftentimes about the business, not the technology. The same thinking applies to data analysis: it’s not always about the technicality but about the business implications. Data science project success criteria should include project management success criteria as well. This will ensure smooth adoption, easy buy-ins, room for wins and cooperating stakeholders. So, a good data scientist should also possess some qualities of a good project manager.


Q:What is latent semantic indexing? What is it used for? What are the specific limitations of the method?
A: * Indexing and retrieval method that uses singular value decomposition to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text
* Based on the principle that words that are used in the same contexts tend to have similar meanings
* “Latent”: semantic associations between words are present not explicitly but only latently
* For example: two synonyms may never occur in the same passage but should nonetheless have highly associated representations

Used for:

* Learning correct word meanings
* Subject matter comprehension
* Information retrieval
* Sentiment analysis (social network analysis)
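The SVD machinery behind LSI can be shown in a few lines. The sketch below is illustrative only, assuming NumPy is available; the tiny term-document matrix and vocabulary are invented for the demonstration. The two "synonyms" never share a document, yet their latent-space vectors end up far more associated than those of unrelated terms:

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
# "car" and "automobile" never co-occur, but both co-occur with "engine".
terms = ["car", "automobile", "engine", "banana"]
X = np.array([
    [1, 0, 1, 0, 0],  # car        appears in docs 0 and 2
    [0, 1, 1, 0, 0],  # automobile appears in docs 1 and 2
    [1, 1, 1, 0, 0],  # engine     appears in docs 0, 1 and 2
    [0, 0, 0, 1, 1],  # banana     appears in docs 3 and 4 only
], dtype=float)

# Truncated SVD: keep only the k strongest latent dimensions (the "concepts").
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vectors = U[:, :k] * s[:k]  # each term as a point in latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_car_auto = cosine(term_vectors[0], term_vectors[1])    # ~1.0: synonyms associate
sim_car_banana = cosine(term_vectors[0], term_vectors[3])  # ~0.0: unrelated term doesn't
print(sim_car_auto > sim_car_banana)  # True
```

Production LSI would start from a TF-IDF-weighted matrix and keep a few hundred latent dimensions rather than two, but the mechanism, truncated SVD surfacing co-occurrence structure, is the same.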



@AnalyticsWeek: Big Data at Work: Paul Sonderegger




You can use all the quantitative data you can get, but you still have to distrust it and use your own intelligence and judgment. – Alvin Toffler


#FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership





Facebook users send on average 31.25 million messages and view 2.77 million videos every minute.

Sourced from: Analytics.CLUB #WEB Newsletter

See what you never expected with data visualization

Written by Natan Meekers

A strong quote from John Tukey explains the essence of data visualization:

“The greatest value of a picture is when it forces us to notice what we never expected to see.”

Tukey was a famous American mathematician who truly understood data – its structure, patterns and what to look for. Because of that, he was able to come up with some great innovations, like the box plot. His powerful one-liner is a perfect introduction to this topic, because it points out the value of seeing things that we never expected to see.

With the large amounts of data generated every day, it’s impossible to keep up by looking at numbers only. Applying simple visualization techniques helps us to “hear” what the data is telling us. This is because the brain works in two parts: the left side is logical, the mathematician; the right side is creative, the artist.

Mercedes-Benz, the luxury carmaker, illustrated the value of visualization in its “Whole Brain” campaign in 2012. Ads showed how the two opposing parts of the brain complement each other. They juxtaposed the left side responsible for logic and analysis with the creative and intuitive right side. Through visualization, the campaign communicated that Mercedes-Benz, like the brain, is a combination of opposites. Working together, they create technological innovation, breakthrough engineering, inspiring design and passion.

Mercedes ad depicting left and right brain functions

Visualizing data, i.e. combining left and right sides, lets you optimize decision-making and speed up ad-hoc analysis. That helps you see trends as they’re occurring and take immediate action when needed.

The most impressive thing is that accurate and informative visualizations are just a click away for you, even as a business user. NO technical background or intensive training required at all. With the self-service capabilities of modern tools, you can get much more value out of your data just by pointing and clicking.

Data visualization plays a critical role in a world where so much data is pouring in from so many sources every day. It helps us to understand that data more easily. And we can detect hidden patterns, trends or events quicker than ever before. So start using your data TODAY for what it’s really worth.

To read the original article on SAS Voices, click here.

Source: See what you never expected with data visualization by analyticsweekpick

Democratizing Self-Service Cognitive Computing Analytics with Machine Learning

There are few areas of the current data landscape that the self-service movement has not altered and positioned firmly within the grasp of the enterprise and its myriad users, from novices to the most accomplished IT personnel.

One can argue that cognitive computing and its self-service analytics have always been a forerunner of this effort, as their capability of integrating and analyzing disparate sources of big data to deliver rapid results with explanations and recommendations proves.

Historically, machine learning and its penchant for predictive analytics has functioned as the most accessible of cognitive computing technologies that include natural language processing, neural networks, semantic modeling and vocabularies, and other aspects of artificial intelligence. According to indico co-founder and CEO Slater Victoroff, however, the crux of machine learning’s utility might actually revolve around deep learning and, specifically, transfer learning.

By accessing these technologies at scale via the cloud, enterprises can now deploy cognitive computing analytics on sets of big data without data scientists and the inordinate volumes of data required to develop the models and algorithms that function at the core of machine learning.

From Machine Learning to Deep Learning
The cost, scale, and agility advantages of the cloud have resulted in numerous Machine Learning-as-a-Service vendors, some of which substantially enhance enterprise utility with Deep Learning-as-a-Service. Machine learning is widely conceived of as a subset of predictive analytics in which existing models of algorithms are informed by the results of previous ones, so that future models are formed quicker to tailor analytics according to use case or data type. According to Slater, deep learning algorithms and models “result in better accuracies for a wide variety of analytical tasks.” Largely considered a subset of machine learning, deep learning is understood as a more mature form of the former. That difference is conceptualized in multiple ways, including “instead of trying to handcraft specific rules to solve a given problem (relying on expert knowledge), you let the computer solve it (deep learning approach),” Slater mentioned.

Transfer Learning and Scalable Advantages
The parallel is completed with an analogy of machine learning likened to an infant and deep learning likened to a child. Whereas an infant must be taught everything, “a child has automatically learnt some approximate notions of what things are, and if you can build on these, you can get to higher level concepts much more efficiently,” Slater commented. “This is the deep learning approach.” That distinction in efficiency is critical in terms of scale and data science requirements, as there is a “100 to 100,000 ratio” according to Slater on the amounts of data required to form the aforementioned “concepts” (modeling and algorithm principles to solve business problems) with a deep learning approach versus a machine learning one. That difference is accounted for by transfer learning, a subset of deep learning that “lets you leverage generalized concepts of knowledge when solving new problems, so you don’t have to start from scratch,” Slater revealed. “This means that your training data sets can be one, two or even three orders of magnitude smaller in size and this makes a big difference in practical terms.”
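The scale argument can be made concrete with some back-of-the-envelope arithmetic. The sketch below is purely illustrative, with hypothetical layer sizes (nothing here comes from indico): freezing a pretrained "body" and fitting only a small task-specific head leaves orders of magnitude fewer parameters to learn, which is the intuition behind needing orders of magnitude less training data.

```python
def dense_params(sizes):
    """Trainable parameters in a stack of fully connected layers
    (weight matrices plus bias vectors)."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

# Hypothetical model: 4096-dim input, two hidden layers, 10 output classes.
layers = [4096, 512, 128, 10]

# Learning-from-scratch approach: fit every layer on the new task.
from_scratch = dense_params(layers)

# Transfer-learning approach: reuse the pretrained body [4096, 512, 128]
# as a frozen feature extractor and fit only the new head [128, 10].
transfer = dense_params([128, 10])

print(from_scratch, transfer)  # 2164618 1290
```

With nearly 1,700 times fewer free parameters, the head can be fit from a correspondingly smaller labeled set, which points in the same direction as the ratio Slater describes, though the exact factor depends on the architecture and task.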

Image and Textual Analytics on “Messy” Unstructured Data
Those practical terms expressly denote the difference between staffing multiple data scientists to formulate algorithms on exorbitant sets of big data, versus leveraging a library of preset models of service providers tailored to vertical industries and use cases. These models are also readily modified by competent developers. Providers such as indico offer these solutions for companies tasked with analyzing the most challenging “messy data sets”, as characterized by Slater. In fact, the vast forms of unstructured text and image analytics required of unstructured data is ideal for deep learning and transfer learning. “Messy data, by nature, is harder to cope with using handcrafted rules,” Slater observed. “In the case of images things like image quality, lighting conditions, etc. introduce noise. Sarcasm, double negatives, and slang are examples of noise in the text domain. Deep learning allows us to effectively work with real world noisy data and still extract meaningful signal.”

The foregoing library of models utilizing this technology can derive insight from an assortment of textual and image data including characteristics of personality, emotions, various languages, content filtering, and many more. These cognitive computing analytic capabilities are primed for social media monitoring and sentiment analysis in particular for verticals such as finance, marketing, public relations, and others.

Sentiment Analysis and Natural Language Processing
The difference with a deep learning approach is both in the rapidity and the granular nature of the analytics performed. Conventional natural language processing tools are adept at identifying specific words and spellings, and at determining their meaning in relation to additional vocabularies and taxonomies. NLP informed by deep learning can expand this utility to include entire phrases and a plethora of subtleties such as humor, sarcasm, irony and meaning that is implicit to native speakers of a particular language. Such accuracy is pivotal to gauging sentiment.

Additionally, the necessity of image analysis as part of sentiment analysis and other forms of big data analytics is only increasing. Slater characterized this propensity of deep learning in terms of popular social media platforms such as Twitter, in which images are frequently incorporated. Image analysis can detect when someone is holding up a “guitar, and writes by it ‘oh, wow’,” Slater said. Without that image analysis, organizations lose the context of the text and the meaning of the entire post. Moreover, image analysis technologies can also discern meaning in various facial expressions, gestures, and other visual cues that yield insight.

Cognitive Computing Analytics for All
The provisioning of cognitive computing analytics via MLaaS and DLaaS illustrates once again exactly how pervasive the self-service movement is. It also demonstrates the democratization of analytics and the fact that with contemporary technology, data scientists and massive sets of big data (augmented by expensive physical infrastructure) are not required to reap the benefits of some of the fundamental principles of cognitive computing and other applications of semantic technologies. Those technologies and their applications, in turn, are responsible for increasing the very power of analytics and of data-driven processes themselves.

In fact, according to Cambridge Semantics VP of Marketing John Rueter, many of the self-service facets of analytics that are powered by semantic technologies “are built for the way that we think and the way that we analyze information. Now, we’re no longer held hostage by the technology and by solving problems based upon a technological approach. We’re actually addressing problems with an approach that is more aligned with the way we think, process, and do analysis.”


Jul 06, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)



[ AnalyticsWeek BYTES]

>> Big Data Provides Big Insights for U.S. Hospitals by bobehayes

>> 3 S for Building Big Data Analytics Tool of the Future by v1shal

>> Untangling Big Data for Digital Marketing by analyticsweekpick



Insurers turn to outsourcing to shore up data security (Information Management) – under Data Security

Clarabridge Dials Up Customer Connections (CustomerThink) – under Sentiment Analysis

RISELab Takes Flight at UC Berkeley (Datanami) – under Big Data Security



Statistical Thinking and Data Analysis


This course is an introduction to statistical data analysis. Topics are chosen from applied probability, sampling, estimation, hypothesis testing, linear regression, analysis of variance, categorical data analysis, and n… more




Data Analytics Success Starts with Empowerment
Being data driven is not as much of a tech challenge as it is an adoption challenge. Adoption has its roots in the cultural DNA of any organization. Great data-driven organizations weave the data-driven culture into the corporate DNA. A culture of connection, interaction, sharing and collaboration is what it takes to be data driven. It’s about being empowered more than it’s about being educated.


Q: Why is naive Bayes so bad? How would you improve a spam detection algorithm that uses naive Bayes?
A: The "naive" part is the assumption that features are independent (uncorrelated), an assumption that is not feasible in many cases.
One improvement: decorrelate the features first, i.e., transform the data so that its covariance matrix becomes the identity matrix.
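As a hedged sketch of that improvement, here is one way to decorrelate features via ZCA whitening before handing them to a naive Bayes model. The synthetic data and the `whiten` helper are illustrative, not part of the original Q&A:

```python
import numpy as np

def whiten(X, eps=1e-8):
    """Decorrelate features: rotate and rescale so the empirical
    covariance matrix becomes (approximately) the identity."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # ZCA whitening matrix: project onto eigenvectors, scale each
    # axis to unit variance, and rotate back.
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W

rng = np.random.default_rng(0)
# Two strongly correlated features: a poor fit for naive Bayes as-is.
X = rng.normal(size=(500, 2))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]

Xw = whiten(X)
print(np.round(np.cov(Xw, rowvar=False), 2))  # approximately the identity
```

After whitening, the independence assumption at least holds up to second-order statistics, which is usually enough to make a naive Bayes spam filter noticeably less overconfident.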



Agile Data Warehouse Design for Big Data Presentation



Big Data is not the new oil. – Jer Thorp


#BigData @AnalyticsWeek #FutureOfData #Podcast with @DavidRose, @DittoLabs



235 terabytes of data had been collected by the U.S. Library of Congress as of April 2011.

Sourced from: Analytics.CLUB #WEB Newsletter

What Is Happening With Women Entrepreneurs? [Infographics]


On this International Women’s Day, it is a good time to learn how women are shaping the entrepreneurial landscape. Not only is the impact impressive and growing, it is also building sustained growth. In some respects, the impact equals or exceeds that of their male counterparts.

Women entrepreneurs have been on the rise for some time; between 1997 and 2007, women-owned businesses grew twice as fast as those owned by men, at a pace of 44% growth. If that is not an impressive statistic, it is hard to say what is.

Here are a dozen interesting factoids about how women are shaping the business landscape:

  1. In 2005, there were seven women CEOs in the Fortune 500. As of May 2011, there were 12; not many, but growing.
  2. Approximately 32% of women business owners believe that being a woman in a male-dominated industry is beneficial.
  3. The number of women-owned companies with 100 or more employees has increased at nearly twice the growth rate of all other companies.
  4. The vast majority (83%) of women business owners are personally involved in selecting and purchasing technology for their businesses.
  5. The workforces of women-owned firms show more gender equality. Women business owners overall employ a roughly balanced workforce (52% women, 48% men), while men business owners employ 38% women and 62% men, on average.
  6. 3% of all women-owned firms have revenues of $1 million or more compared with 6% of men-owned firms.
  7. Women business owners are nearly twice as likely as men business owners to intend to pass the business on to a daughter or daughters (37% vs. 19%).
  8. Between 1997 and 2002, women-owned firms increased their employment by 70,000, whereas firms owned by men lost 1 million employees.
  9. One in five firms with revenue of $1 million or more is woman-owned.
  10. Women owners of firms with $1 million or more in revenue are more likely to belong to formal business organizations, associations or networks than other women business owners (81% vs. 61%).
  11. Women-owned firms in the U.S. are more likely than all firms to offer flex-time, tuition reimbursement and, at a smaller size, profit sharing to their workers.
  12. 86% of women entrepreneurs say they use the same products and services at home that they do in their business, for familiarity and convenience.

The road is well traveled, and boy, we have covered some distance. Let us embrace the progress and keep breaking the glass ceiling. And to everyone: Happy International Women’s Day!

Infographic: Women in Business
Courtesy of: CreditDonkey

Source: What Is Happening With Women Entrepreneurs? [Infographics]

7 Characteristics to Look Before Hiring a Data Scientist


Data is being collected in droves, but most of the time, people don’t know what to do with it. That’s why data scientists are hot commodities in the startup world right now. In fact, between 2003 and 2013, employment in data industries grew about 21 percent — nearly 16 percent more than overall employment growth. It’s a fairly new concept, but these people are so valuable because they understand the significance of data for your business and how you can use it.

Using analytics, firms can discover patterns and stories in data, build the infrastructure needed to properly collect and store it, inform business decisions and guide strategy. Access to sufficient and robust data is vital to sustained startup growth.

Companies need to incorporate data science into their business models as early as possible while they’re taking risks and making crucial decisions about the future. But how do you know whether your company is ready to go the extra mile and hire a data scientist?

First, you need to make sure you can afford to hire one. On average, a single data scientist costs a company $100,000 annually. A team of data engineers, machine learning experts and modelers can cost millions.

Smaller companies may need to create software solutions and invest time in building revenue to ensure they can actually utilize a data scientist’s skills. Tools such as Tableau, Qlik and Google Charts can help you plot and visualize the results of your data collection, connect this information to dashboards and quickly glean actionable insights.

Once your business is ready to make a larger investment to gain a competitive edge, there are several key traits to seek out in potential candidates. The best data scientists are:

1. Skilled.
All the data in the world won’t illuminate much if the scientist analyzing it doesn’t possess practical IT skills, experience with the tools mentioned above and a thorough understanding of basic security practices. A solid background in mathematics and statistics is also an indispensable trait; this demonstrates an intellectual rigor and the ability to confidently synthesize and massage many types of data sets.

2. Aware.
Armed with a thorough understanding of the pressures inherent to certain industries, skilled data scientists can effectively enlighten the decision-making process. To this end, interview recruits about how they view the competitive climate at the moment.

3. Proven.
A good way to guarantee you hire the best data scientist for your needs is to ask each contender to develop a sample presentation based on a specific set of data you provide. Then, pursue the candidates who convey real vision, robust understanding and deep insight.

Related: 4 Things a Data Scientist Can Do for Entrepreneurs

4. Entrepreneurial.
Data scientists energize enterprise through discovery. Natural curiosity and enthusiasm for solving big problems coupled with an ability to transform data into a product may place one candidate above the rest.

5. Agile.
Just as successful startup teams depend on across-the-board versatility, data scientists must be agile enough to quickly modify their methods to suit changes within a particular industry.

6. Intuitive.
You want this person to beat you to the punch when it comes to anticipating questions that data could answer. Look for someone who has a keen sense for future data applications.

7. Strong communication.
Insight that can’t be expressed is worthless. Good data scientists are able to uncover data patterns and are willing to explain those patterns in clear and helpful ways through thoughtful and open communication. They should know how to present visualizations of data and tell a story through numbers.

The perfect complement to a scaling, booming startup is a data scientist with a killer skill set. By sharing the burden and excitement of making crucial business decisions, this single hire can take your startup from data zero to data hero in no time.

Note: This article originally appeared in Entrepreneur. Click for link here.

Source by analyticsweekpick

Jun 29, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)




 Defense contractor stored intelligence data in Amazon cloud unprotected – Ars Technica Under  Cloud

 Qatar firms can meet hybrid cloud challenges – Peninsula On-line Under  Hybrid Cloud

 The tricky, personal politics of cloud security | Network World – Network World Under  Cloud Security



Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz


Use data to build a better startup faster in partnership with Geckoboard… more


The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World


In the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Mast… more


Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been focused on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning. That is, the numbers in your data represent something of interest, an outcome that is important to your business. The meaning of those numbers is about the veracity of your data.


Q: How frequently must an algorithm be updated?
A: You want to update an algorithm when:
– You want the model to evolve as data streams through the infrastructure
– The underlying data source is changing (example: a retail store model that must remain accurate as the business grows)
– You are dealing with non-stationarity

Some options:
– Incremental algorithms: the model is updated every time it sees a new training example.
Note: simple, and you always have an up-to-date model, but you cannot weight data to different degrees. Sometimes this is mandatory: when data must be discarded once seen (e.g., for privacy).
– Periodic re-training in “batch” mode: simply buffer the relevant data and update the model every so often.
Note: more decisions to make and a more complex implementation.

How frequently?
– Is the sacrifice worth it?
– Data horizon: how quickly do you need the most recent training example to be part of your model?
– Data obsolescence: how long does it take before data is irrelevant to the model? Are some older instances more relevant than the newer ones?
Economics: generally, newer instances are more relevant than older ones. However, data from the same month, quarter, or year of the previous year can be more relevant than other periods of the current year, and in a recession, data from previous recessions can be more relevant than newer data from a different economic cycle.
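The two options, incremental updates versus periodic batch re-training, can be sketched minimally. Here a running mean stands in for a real model, and all names are illustrative:

```python
class IncrementalMean:
    """Minimal incremental 'model': a running mean updated one
    training example at a time (no buffering, no re-training)."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        # Welford-style update: always up to date, data can be discarded.
        self.n += 1
        self.mean += (x - self.mean) / self.n

stream = [2.0, 4.0, 6.0, 8.0]

# Incremental: update as each example arrives.
model = IncrementalMean()
for x in stream:
    model.update(x)

# Periodic "batch" mode: buffer everything, refit every so often.
buffer = list(stream)
batch_mean = sum(buffer) / len(buffer)

print(model.mean, batch_mean)  # prints 5.0 5.0
```

Both arrive at the same answer here, but the incremental model never had to store the stream, while the batch version keeps a buffer and lets you decide when (and on which data) to refit.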



Decision-Making: The Last Mile of Analytics and Visualization



He uses statistics as a drunken man uses lamp posts—for support rather than for illumination. – Andrew Lang


#FutureOfData Podcast: Conversation With Sean Naismith, Enova Decisions



More than 200 billion HD movies, which would take a person 47 million years to watch.

Sourced from: Analytics.CLUB #WEB Newsletter

8 Data Security Tips For Small Businesses

In 2015, more than 169 million personal records were exposed, including financial records, trade secrets, and sensitive files from the education, government, and healthcare sectors. Though big organizations are the usual victims of data breaches, an ongoing trend shows that small businesses are rapidly becoming favored targets for hackers. 2017 should be the year businesses fix their security loopholes.

Here’s a handy cheat sheet of eight data security tips for anyone revisiting their data security strategy.

The pointers include:
Designate Computer Access Levels
Enable Two-Factor Authentication
Secure Wireless Network Connection
Use SSL for exchanging Sensitive Data
Use Trusted Resources for storage
Store Encrypted Data Backups
Make your staff aware
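As one concrete illustration of the two-factor authentication pointer, here is a minimal time-based one-time password (TOTP, RFC 6238) generator using only the Python standard library. The secret and parameters are illustrative, a sketch rather than a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, t=None):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 seconds.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59))  # prints 287082
```

A second factor like this means a stolen password alone is not enough; the attacker would also need the device holding the shared secret.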

8 Data Security Tips For Small Businesses

Originally Posted at: 8 Data Security Tips For Small Businesses