May 03, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)




More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Santa 2.0, What Santa could do with technology by v1shal

>> How Airbnb Uses Big Data And Machine Learning To Guide Hosts To The Perfect Price by analyticsweekpick

>> 80/20 Rule For Startups by v1shal

Wanna write? Click Here


Marsh Enhances Cyber Risk Products to Address Business Interruption Risks – Insurance Journal Under Risk Analytics

Social Media Analytics Market: Rapidly Growing Dynamic Markets – CMFE News (press release) (blog) Under Social Analytics

Alabama Passes Data Security and Data Breach Notification Statute – JD Supra (press release) Under Data Security

More NEWS ? Click Here


Python for Beginners with Examples


A practical Python course for beginners with examples and exercises.


The Industries of the Future


The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring.


Save yourself from zombie apocalypse from unscalable models
One living and breathing zombie in today’s analytical models is the conspicuous absence of error bars. Not every model scales or holds its ground as data grows. The error bars attached to almost every model should be duly calibrated: as business models rake in more data, error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failure, leading to a Halloween we never want to see.
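The calibration point can be sketched with a simple bootstrap: recompute the error bar on a model estimate as more data arrives and check that it behaves sensibly. This is a hypothetical illustration (the function names and the Gaussian toy data are assumptions, not from the article):

```python
import random
import statistics

def bootstrap_ci(data, n_boot=1000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean of `data`."""
    means = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        means.append(statistics.mean(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

random.seed(0)
# Toy "business metric" observations arriving over time.
population = [random.gauss(100, 15) for _ in range(10_000)]

# Recalibrate the error bar as the model sees more data; a healthy,
# scalable estimate should show the interval narrowing roughly as 1/sqrt(n).
for n in (50, 500, 5000):
    lo, hi = bootstrap_ci(population[:n])
    print(f"n={n:5d}  95% CI width for the mean = {hi - lo:.2f}")
```

If the interval fails to shrink (or widens) as data accumulates, that is the zombie: the model does not hold its ground at scale.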


Q: Give examples of data that has neither a Gaussian nor a log-normal distribution.
A: * Allocation of wealth among individuals
* Values of oil reserves among oil fields (many small ones, a small number of large ones)
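Both answers are classic heavy-tailed, Pareto-like distributions: a few enormous values dominate the total, which no Gaussian can reproduce. A quick sketch (hypothetical; the shape parameter is chosen to roughly match the "80/20" wealth rule, not taken from the Q&A):

```python
import random

random.seed(1)

# A Pareto shape parameter near 1.16 approximately reproduces the
# Pareto principle: the top 20% hold about 80% of the total.
alpha = 1.16
wealth = [random.paretovariate(alpha) for _ in range(100_000)]

# Share of the total held by the largest 20% of values.
wealth.sort(reverse=True)
share = sum(wealth[: len(wealth) // 5]) / sum(wealth)
print(f"Share of total held by the top 20%: {share:.0%}")
```

Run the same computation on Gaussian draws and the top 20% hold only a modest fraction of the total, which is exactly why wealth and oil-reserve data cannot be modeled as Gaussian.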



@ChuckRehberg / @TrigentSoftware on Translating Technology to Solve Business Problems #FutureOfData #Podcast


Subscribe to YouTube


Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom. – Clifford Stoll


Want to fix #DataScience ? fix #governance by @StephenGatchell @Dell #FutureOfData #Podcast



iTunes  GooglePlay


Brands and organizations on Facebook receive 34,722 Likes every minute of the day.

Sourced from: Analytics.CLUB #WEB Newsletter

2016 Trends in Big Data: Insights and Action Turn Big Data Small

Big data’s salience throughout the contemporary data sphere is all but solidified. Gartner indicates its technologies are embedded within numerous facets of data management, from conventional analytics to sophisticated data science issues.

Consequently, expectations for big data will shift this year. It is no longer sufficient to justify big data deployments by emphasizing the amount and variety of data these technologies ingest; deployments must instead demonstrate the specific business value they create through targeted applications and use cases that provide, ideally, quantifiable results.

The shift in big data expectations, then, will go from big to small. That transformation in the perception and deployments of big data will be spearheaded by numerous aspects of data management, from the evolving roles of Chief Data Officers to developments in the Internet of Things. Still, the most notable trends impacting big data will inevitably pertain to the different aspects of:

• Ubiquitous Machine Learning: Machine learning will prove one of the most valuable technologies for reducing time to insight and action for big data. Its propensity for generating future algorithms based on the demonstrated use and practicality of current ones can improve analytics and the value they yield. It can also expedite numerous preparation processes related to data integration, cleansing, transformation, and others, while smoothing data governance implementation.
• Cloud-Based IT Outsourcing: The cloud benefits of scale, cost, and storage will alter big data initiatives by transforming IT departments. The new paradigm for this organizational function will involve a hybridized architecture in which all but the most vital and longstanding systems are outsourced to complement existing infrastructure.
• Data Science for Hire: Whereas some of the more complicated aspects of data science (tailoring solutions to specific business processes) will remain tenuous, numerous aspects of this discipline have become automated and accelerated. The emergence of a market for algorithms, Machine Learning-as-a-Service, and self-service data discovery and management tools will spur this trend.

From Machine Learning to Artificial Intelligence
The correlation between these three trends is perhaps best typified by the increasing prevalence of machine learning, which is an integral part of many of the analytics functions that IT departments are outsourcing and of the aspects of data science that have become automated. Expectations for machine learning will truly blossom this year, with Gartner offering numerous predictions for the end of the decade in which elements of artificial intelligence are normative parts of daily business activities. The projected expansion of the IoT, and the automated, predictive analytics required for its continued growth, will increase the reliance on machine learning, while its applications in various data preparation and governance tools are equally vital.

Nonetheless, the chief way in which machine learning will help to shift the focus of big data from sprawling to narrow relates to the fact that it either eschews or hastens human involvement in all of the aforementioned processes, and in many others as well. Forrester predicted that: “Machine learning will replace manual data wrangling and data governance dirty work…The freeing up of time will accelerate the execution of data and analytics strategies, allowing organizations to get to the good stuff, taking actions and driving better business outcomes based on the data.” Machine learning will enable organizations to spend less time managing their data and more time creating action from the insights they provide.

Accelerating data management processes also enables users to spend more time understanding their data. John Rueter, Vice President of Marketing at Cambridge Semantics, denoted the importance of establishing the context and meaning of data. “Everyone is in such a race to collect as much data as they can and store it so they can get to it when they want to, when oftentimes they really aren’t thinking ahead of time about what they want to do with it, and how it is going to be used. The fact of the matter is what’s the point of collecting all this data if you don’t understand it?”

Cloud-Based IT
The trend of outsourcing IT to the cloud is evinced in a number of ways, from a distributed model of data management to one in which IT resources are more frequently accessed through the cloud. The variety of basic data management services that the enterprise can outsource via the cloud (including analytics, integration, computation, CRM, etc.) is revamping typical architectural concerns, which increasingly involve the cloud. These facts are substantiated by IDC’s predictions that, “By 2018, at least 50% of IT spending will be cloud based. By 2018, 65% of all enterprise IT assets will be housed offsite and 33% of IT staff will be employed by third-party, managed service providers.”

The impact of this trend goes beyond merely extending the cloud’s benefits of decreased infrastructure, lower costs, and greater agility. It means that a number of pivotal facets of data management will require less daily manipulation on the part of the enterprise, and that end users can implement the results of those data-driven processes more quickly and for more specific use cases. Additionally, this trend heralds a fragmentation of the CDO role. The inherent decentralization involved in outsourcing IT functions through the cloud will be reflected in an evolution of this position. The aforementioned Forrester post notes that “We will likely see fewer CDOs going forward but more chief analytics officers, or chief data scientists. The role will evolve, not disappear.”

Self-Service Data Science
Data science is another realm in which the other two 2016 trends in big data coalesce. The predominance of machine learning helps to improve the analytical insight gleaned from data science, just as a number of key attributes of this discipline are being outsourced and accessed through the cloud. Those include numerous facets of the analytics process, such as data discovery, source aggregation, multiple types of analytics and, in some instances, even analysis of the results themselves. As Forrester indicated, “Data science and real-time analytics will collapse the insights time-to-market. The trending of data science and real-time data capture and analytics will continue to close the gaps between data, insight and action.” For 2016, Forrester predicts: “A third of firms will pursue data science through outsourcing and technology. Firms will turn to insights services, algorithms markets, self-service advanced analytics tools, and cognitive computing capabilities, to help fill data science gaps.”

Self-service data science options for analytics take myriad forms, from providers of graph analytics to Machine Learning-as-a-Service and various kinds of cognitive computing. The burgeoning algorithms market is a vital aspect of this automation of data science, enabling companies to leverage existing algorithms with their own data. Some algorithms are stratified by use case, business unit, or vertical industry. Similarly, Machine Learning-as-a-Service options provide excellent starting points: organizations simply add their data and reap predictive analytics capabilities.

Targeting Use Cases to Shrink Big Data
The principal point of commonality between all of these trends is the furthering of the self-service movement and the ability it gives end users to home in on the uses of data, as opposed to merely focusing on the data itself and its management. The ramification is that organizations and individual users will be able to tailor and target their big data deployments for individualized use cases, creating more value at the departmental and intradepartmental levels, and for the enterprise as a whole. The facilitation of small applications and uses of big data will justify this technology’s dominance of the data landscape.

Source: 2016 Trends in Big Data: Insights and Action Turn Big Data Small

The Big Data Game-Changer: Public Data and Semantic Graph Databases

By Jans Aasman, Ph.D., CEO of Franz Inc.

Big Data’s influence across the data landscape is well known and virtually undeniable. Organizations are adopting a greater diversity of sources and data structures, in rapidly increasing quantities, while demanding analytics results ever faster.

Of equal importance is how big data’s influence is shaping that landscape. Gartner asserted, “The number and variety of public-facing open datasets and Web APIs published by all tiers of governments worldwide continues to increase.” The inclusion of this growing variety of public data sources shows that big data is, increasingly, big public data.

The key is to expeditiously integrate that data—in a well-governed, sustainable manner—with proprietary enterprise data for timely analytic action. Semantic graph database technology is built to facilitate data integration and as such surpasses virtually every other method for leveraging public data. The recent explosion of public sources of big data is effectively dictating the need for semantic graph databases.

The Smart Data Approach
More than any other type of analytics, the analysis and integration of public big data comprehensively utilizes the self-describing, smart data technologies on which semantic graph databases hinge. The enormous volumes and velocities of big data benefit from this intrinsic understanding of specific data elements, which is expressed in semantic statements known as triples. But it is the growing variety of data types involved in integrating public and private big data sources that most exploits this self-identifying penchant of semantic data, especially when linking disparate data sets.

This facet of smart data proves invaluable when modeling and integrating structured and unstructured (public) data during analytic preparation. The same methods by which proprietary data are modeled can be used to incorporate public data sources in a uniform way. When integrating unstructured or semi-structured public data with structured data for fraud detection, hedge fund analysis, or other use cases, semantic graph databases’ propensity to readily glean the meaning of data and the relationships between data elements is critical to immediate responses.

Triple Intelligence
Triple stores are integral to incorporating public big data with internal company sources because they provide a form of machine intelligence essential to understanding how data elements relate to one another. Every semantic statement provides meaning about data; triple stores use these statements as the basis for inferring further ways in which the data interrelate.

For example, say the enterprise data warehouse of a hospital holds data about a patient expressed in triples such as: Patient X takes the drug Aspirin, and Patient X takes the drug Insulase. A publicly available drug database will have triples such as: Chlorpropamide has the brand name Insulase, and Chlorpropamide has a drug interaction with Aspirin. The reasoning in the triple store will instantly conclude that Patient X has a problem.
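That inference chain can be sketched as a toy triple store in a few lines of Python. This is a hypothetical illustration of the reasoning, not the API of any actual semantic graph database; the predicate names and helper functions are assumptions:

```python
# Each triple is (subject, predicate, object).
hospital = {
    ("PatientX", "takes", "Aspirin"),
    ("PatientX", "takes", "Insulase"),
}
public_drug_db = {
    ("Chlorpropamide", "hasBrandName", "Insulase"),
    ("Chlorpropamide", "interactsWith", "Aspirin"),
}
# Integrating internal and public data is just a union of triples.
triples = hospital | public_drug_db

def brand_to_generic(brand):
    """Resolve a brand name to its generic drug, if one is listed."""
    for s, p, o in triples:
        if p == "hasBrandName" and o == brand:
            return s
    return brand

def interaction_alerts(patient):
    """Infer drug-interaction problems for one patient."""
    taken = {brand_to_generic(o) for s, p, o in triples
             if s == patient and p == "takes"}
    alerts = []
    for s, p, o in triples:
        if p == "interactsWith" and s in taken and brand_to_generic(o) in taken:
            alerts.append((s, o))
    return alerts

print(interaction_alerts("PatientX"))  # → [('Chlorpropamide', 'Aspirin')]
```

The alert only fires because the brand name Insulase is first resolved to Chlorpropamide through the public triples; neither data set alone contains the problem.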

Such an example illustrates the usefulness of triple stores in contextualizing public big data integrated with internal data. Firstly, this type of basic inferencing is not possible with other technologies, including relational databases and graph databases that do not involve semantics: the latter focus on the graph’s nodes and their properties, whereas semantic graph databases focus on the relationships between nodes (the edges). Furthermore, such intelligent inferencing shows that these stores can actually learn. Finally, such inferencing is invaluable when leveraged at scale, accounting for the numerous subtleties among big data elements, and is another way of deriving meaning from data in low-latency production environments.

Public Big Data
Much of the value that public big data delivers pertains to general knowledge generated by researchers, scientists and data analysts from the government. By integrating this knowledge with big data within the enterprise we can build new applications that benefit the enterprise and society.

Jans Aasman, Ph.D., is the CEO of Franz Inc., an early innovator in Artificial Intelligence and a leading supplier of Semantic Graph Database technology.

Source: The Big Data Game-Changer: Public Data and Semantic Graph Databases by jaasman