Aug 31, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Human resource  Source

[ AnalyticsWeek BYTES]

>> List of VC firms in Boston by v1shal

>> For the airline industry, big data is cleared for take-off by analyticsweekpick

>> THE FUTURE OF BIG DATA by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Little data analytics – TechSpot Under Big Data Analytics

>> The coal miner who became a data miner – May. 17, 2017 – CNNMoney Under Data Scientist

>> The Amazon effect is hitting the apparel industry – CNBC Under Sales Analytics

More NEWS? Click Here

[ FEATURED COURSE]

R, ggplot, and Simple Linear Regression


Begin to use R and ggplot while learning the basics of linear regression… more

[ FEATURED READ]

Machine Learning With Random Forests And Decision Trees: A Visual Guide For Beginners


If you are looking for a book to help you understand how the machine learning algorithms “Random Forest” and “Decision Trees” work behind the scenes, then this is a good book for you. Those two algorithms are commonly u… more

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not so much a technology challenge as an adoption challenge. Adoption has its roots in the cultural DNA of an organization, and great data-driven organizations weave a data-driven culture into their corporate DNA. A culture of connection, interaction, sharing, and collaboration is what it takes to be data driven. It is about being empowered more than it is about being educated.

[ DATA SCIENCE Q&A]

Q:What are the drawbacks of linear model? Are you familiar with alternatives (Lasso, ridge regression)?
A: * Assumption of linearity of the errors
* Can’t be used for count outcomes, binary outcomes
* Can’t vary model flexibility: overfitting problems
* Alternatives: see question 4 about regularization
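For readers who want to see the regularized alternatives in action, here is a minimal sketch (assuming scikit-learn and NumPy are available; the data are synthetic and purely illustrative):

```python
# Hypothetical data: only the first of ten features actually drives the outcome.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] + rng.normal(size=200)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: pushes uninformative coefficients to exactly zero
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks all coefficients toward zero

print(np.round(ols.coef_, 2))
print(np.round(lasso.coef_, 2))
print(np.round(ridge.coef_, 2))
```

Comparing the three coefficient vectors illustrates the trade-off: the lasso tends to zero out weak features entirely, while ridge keeps them but damps them.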

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MPFlowersNYC, @enigma_data


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Everybody gets so much information all day long that they lose their common sense. – Gertrude Stein

[ PODCAST OF THE WEEK]

#FutureOfData Podcast: Conversation With Sean Naismith, Enova Decisions


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Poor data can cost businesses 20%–35% of their operating revenue.

Sourced from: Analytics.CLUB #WEB Newsletter

Improving Big Data Governance with Semantics

By Dr. Jans Aasman, Ph.D., CEO of Franz Inc.

Effective data governance consists of protocols, practices, and the people necessary for implementation to ensure trustworthy, consistent data. Its yields include regulatory compliance, improved data quality, and data’s increased valuation as a monetary asset that organizations can bank on.

Nonetheless, these aspects of governance would be impossible without what is arguably its most important component: the common terminologies and definitions that are sustainable throughout an entire organization, and which comprise the foundation for the aforementioned policy and governance outcomes.

When intrinsically related to the technologies used to implement governance protocols, terminology systems (containing vocabularies and taxonomies) can unify terms and definitions at a granular level. The result is a greatly increased ability to tackle the most pervasive challenges associated with big data governance including recurring issues with unstructured and semi-structured data, integration efforts (such as mergers and acquisitions), and regulatory compliance.

A Realistic Approach
Designating the common terms and definitions that are the rudiments of governance varies according to organization, business units, and specific objectives for data management. Creating policy from them and embedding them in technology that can achieve governance goals is perhaps most expediently and sustainably facilitated by semantic technologies, which are playing an increasingly pivotal role in the overall implementation of data governance in the wake of big data’s emergence.

Once organizations adopt a glossary of terminology and definitions, they can then determine rules about terms based on their relationships to one another via taxonomies. Taxonomies are useful for disambiguation purposes and can clarify preferred labels—among any number of synonyms—for different terms in accordance to governance conventions. These definitions and taxonomies form the basis for automated terminology systems that label data according to governance standards via inputs and outputs. Ingested data adheres to terminology conventions and is stored according to preferred labels. Data captured prior to the implementation of such a system can still be queried according to the system’s standards.
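As a concrete illustration of the preferred-label idea (not taken from the article), here is a small sketch using the rdflib library and the SKOS vocabulary; the concept, labels, and helper function are hypothetical:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/terms/")
g = Graph()

# One governed concept with a preferred label and two synonyms as alternate labels.
g.add((EX.MyocardialInfarction, SKOS.prefLabel, Literal("myocardial infarction")))
g.add((EX.MyocardialInfarction, SKOS.altLabel, Literal("heart attack")))
g.add((EX.MyocardialInfarction, SKOS.altLabel, Literal("MI")))

def preferred_label(term):
    """Resolve any incoming synonym to the governed preferred label."""
    query = """
        PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
        SELECT ?pref WHERE {
            ?concept skos:prefLabel ?pref .
            { ?concept skos:altLabel ?label } UNION { ?concept skos:prefLabel ?label }
            FILTER (lcase(str(?label)) = lcase(str(?term)))
        }
    """
    rows = g.query(query, initBindings={"term": Literal(term)})
    return next((str(row.pref) for row in rows), term)

print(preferred_label("heart attack"))   # -> "myocardial infarction"
```

Ingested records can then be stored under the preferred label, while queries still match any of the synonyms.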

Linking Terminology Systems: Endless Possibilities
The possibilities that such terminology systems produce (especially for unstructured and semi-structured big data) are virtually limitless, particularly with the linking capabilities of semantic technologies. In the medical field, a handwritten note hastily scribbled by a doctor can be readily transcribed by the terminology system, in accordance with governance policy, using preferred terms, effectively giving structure to unstructured data. Moreover, it can be linked to billing coding systems per business functions. That structured data can then be stored in a knowledge repository and queried along with other data, adding to the comprehensive integration and accumulation of data that gives big data its value.

Focusing on common definitions and linking terminology systems enables organizations to leverage business intelligence and analytics on different databases across business units. This method is also critical for customer disambiguation, a frequently occurring problem across vertical industries. In finance, it is possible for institutions with numerous subsidiaries and acquisitions (such as Citigroup, Citibank, Citi Bike, etc.) to determine which subsidiary actually spent how much money with the parent company, and to resolve additional internal, data-sensitive problems, by using a common repository. Alternatively, linking the different terminology repositories for these distinct yet related entities can achieve the same objective.

The primary way in which semantics addresses linking between terminology systems is by ensuring that those systems are utilizing the same words and definitions for the commonality of meaning required for successful linking. Vocabularies and taxonomies can provide such commonality of meaning, which can be implemented with ontologies to provide a standards-based approach to disparate systems and databases.

Subsequently, all systems that utilize those vocabularies and ontologies can be linked. In finance, the Financial Industry Business Ontology (FIBO) is being developed to grant “data harmonization and…the unambiguous sharing of meaning across different repositories.” The life sciences industry is similarly working on industry wide standards so that numerous databases can be made available to all within this industry, while still restricting access to internal drug discovery processes according to organization.

Regulatory Compliance and Ontologies
In terms of regulatory compliance, organizations are much more flexible and swift in accounting for new requirements when data throughout disparate systems and databases are linked and commonly shared—requiring just a single update as opposed to numerous time-consuming updates in multiple places. Issues of regulatory compliance are also assuaged in a semantic environment through the use of ontological models, which provide the schema for models built specifically in adherence to regulatory requirements.

Organizations can use ontologies to describe such requirements, then write rules for them that both restrict and permit access and usage according to regulations. Although ontological models can also be created for any other sort of requirements pertaining to governance (metadata, reference data, etc.) it is somewhat idealistic to attempt to account for all facets of governance implementation via such models. The more thorough approach is to do so with terminology systems and supplement them accordingly with ontological models.

Terminologies First
The true value in utilizing a semantic approach to big data governance that focuses on terminology systems, their requisite taxonomies, and vocabularies pertains to the fact that this method is effective for governing unstructured data. Regardless of what particular schema (or lack thereof) is available, organizations can get their data to adhere to governance protocols by focusing on the terms, definitions, and relationships between them. Conversely, ontological models have a demonstrated efficacy with structured data. Given the fact that the majority of new data created is unstructured, the best means of wrapping effective governance policies and practices around them is through leveraging these terminology systems and semantic approaches that consistently achieve governance outcomes.

About the Author: Dr. Jans Aasman, Ph.D., is the CEO of Franz Inc., an early innovator in Artificial Intelligence and a leading supplier of Semantic Graph Database technology. Dr. Aasman’s previous experience and educational background include:
• Experimental and cognitive psychology at the University of Groningen, specialization: Psychophysiology, Cognitive Psychology.
• Tenured Professor in Industrial Design at the Technical University of Delft. Title of the chair: Informational Ergonomics of Telematics and Intelligent Products
• KPN Research, the research lab of the major Dutch telecommunication company
• Carnegie Mellon University. Visiting Scientist at the Computer Science Department of Prof. Dr. Allan Newell

Source: Improving Big Data Governance with Semantics by jaasman

Why the time is ripe for security behaviour analytics


Behaviour analytics technology is being developed or acquired by a growing number of information security suppliers. In July 2015 alone, European security technology firm Balabit released a real-time user behaviour analytics monitoring tool called Blindspotter and security intelligence firm Splunk acquired behaviour analytics and machine learning firm Caspida. But what is driving this trend?

Like most trends, there is no single driver, but several key factors that come together at the same time.

In this case, storage technology has improved and become cheaper, enabling companies to store more network activity data; distributed computing capacity is enabling real-time data gathering and analysis; and at the same time, traditional signature-based security technologies or technologies designed to detect specific types of attack are failing to block increasingly sophisticated attackers.
As companies have deployed security controls, attackers have shifted focus from malware to individuals in organisations, either stealing their usernames and passwords to access and navigate corporate networks without being detected or getting their co-operation through blackmail and other forms of coercion.

Stealing legitimate user credentials for both on-premise and cloud-based services is becoming increasingly popular with attackers as a way into an organisation that enables them to carry out reconnaissance, and it is easily done, according to Matthias Maier, European product marketing manager for Splunk.

“For example, we are seeing highly plausible emails that appear to be from a company’s IT support team telling a targeted employee their email inbox is full and their account has been locked. All they need to do is type in their username and password to access the account and delete messages, but in doing so, the attackers are able to capture legitimate credentials without using any malware and access corporate IT systems undetected,” he said.

An increase in such techniques by attackers is driving a growing demand from organisations for technologies such as behaviour analytics, which enable them to build an accurate profile of normal business activity for all employees. This means that if credentials are stolen or people are being coerced into helping attackers, these systems are able to flag unusual patterns of behaviour.
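The underlying idea can be sketched in a few lines; this toy example (not any vendor's product, and with made-up activity counts) builds a per-user baseline and flags a day that deviates sharply from it:

```python
import numpy as np

# Hypothetical daily login counts per user; the last value is the day under review.
history = {
    "alice": [4, 5, 3, 6, 4, 5, 4],
    "bob":   [2, 3, 2, 2, 3, 2, 40],
}

def is_anomalous(counts, z_threshold=3.0):
    baseline = np.array(counts[:-1], dtype=float)   # baseline built from earlier days
    std = baseline.std() or 1.0                     # avoid division by zero
    z = abs(counts[-1] - baseline.mean()) / std
    return z > z_threshold, round(float(z), 1)

for user, counts in history.items():
    flagged, z = is_anomalous(counts)
    print(user, "anomalous" if flagged else "normal", "z =", z)
```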

Read complete article at: http://www.computerweekly.com/news/4500251006/Why-the-time-is-ripe-for-security-behaviour-analytics

Originally Posted at: Why the time is ripe for security behaviour analytics

Aug 24, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Convincing  Source

[ AnalyticsWeek BYTES]

>> The Silent Rockstar of BigData: Machine Learning by v1shal

>> Landscape of Big Data by v1shal

>> CEOs to Employees – Vote for Romney else Face Layoffs. A Good Strategy? by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Target’s Revamped Store Customer Experience Experiments: Culturally, Are They On-Target or Off-Target? – Customer Think Under Customer Experience

>> Amazon and Sears, Tales of Two Retailers – InformationWeek Under Business Analytics

>> Examining Strategies for Combining BI and Hadoop at Data Summit 2017 – Database Trends and Applications Under Hadoop

More NEWS? Click Here

[ FEATURED COURSE]

Statistical Thinking and Data Analysis


This course is an introduction to statistical data analysis. Topics are chosen from applied probability, sampling, estimation, hypothesis testing, linear regression, analysis of variance, categorical data analysis, and n… more

[ FEATURED READ]

Thinking, Fast and Slow


Drawing on decades of research in psychology that resulted in a Nobel Prize in Economic Sciences, Daniel Kahneman takes readers on an exploration of what influences thought example by example, sometimes with unlikely wor… more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better time to talk about our increasing dependence on data analytics to help with our decision making. Data- and analytics-driven decision making is rapidly sneaking its way into our core corporate DNA, yet we are not building practice grounds to test those models fast enough. Snug-looking models can hide nails that cause uncharted pain if left unchecked. This is the right time to start thinking about putting an Analytics Club [a Data Analytics CoE] in your workplace to lab out best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q:Give examples of bad and good visualizations?
A: Bad visualization:
– Pie charts: difficult to make comparisons between items when area is used, especially when there are lots of items
– Color choice for classes: abundant use of red, orange and blue. Readers can think that the colors could mean good (blue) versus bad (orange and red) whereas these are just associated with a specific segment
– 3D charts: can distort perception and therefore skew data
– Not using solid lines in a line chart: dashed and dotted lines can be distracting

Good visualization:
– Heat map with a single color: some colors stand out more than others, giving more weight to that data. A single color with varying shades shows the intensity better
– Adding a trend line (regression line) to a scatter plot helps the reader spot trends
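A quick matplotlib/NumPy sketch of the last point, with synthetic data, single-hue points, and a fitted trend line:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 100))
y = 2.5 * x + rng.normal(scale=3.0, size=100)

slope, intercept = np.polyfit(x, y, deg=1)   # simple linear trend

plt.scatter(x, y, color="steelblue", alpha=0.6)
plt.plot(x, slope * x + intercept, color="darkblue", linewidth=2, label="trend")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```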

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Nathaniel Lin (@analytics123), @NFPA


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Data are becoming the new raw material of business. – Craig Mundie

[ PODCAST OF THE WEEK]

@AnalyticsWeek #FutureOfData with Robin Thottungal(@rathottungal), Chief Data Scientist at @EPA


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

14.9 percent of marketers polled in Crain’s BtoB Magazine are still wondering ‘What is Big Data?’

Sourced from: Analytics.CLUB #WEB Newsletter

Real-Time, Predictive Data Modeling

Traditionally, data modeling has been one of the most time-consuming facets of leveraging data-driven processes. This reality has become significantly aggravated by the variety of big data options, their time-sensitive needs, and the ever growing complexity of the data ecosystem which readily meshes disparate data types and IT systems for an assortment of use cases.

Attempting to design schema for such broad varieties of data in accordance with the time constraints required to act on those data and extract value from them is difficult enough in relational environments. Incorporating such pre-conceived schema with semi-structured, machine-generated data (and integrating them with structured data) complicates the process, especially when requirements dynamically change over time.

Subsequently, one of the most significant trends to impact data modeling is the emerging capability to produce schema on-the-fly based on the data themselves, which considerably accelerates the modeling process while simplifying the means of using data-centric options.

According to Loom Systems VP of Product Dror Mann, “We’ve been able to build algorithms that break the data and structure it. We break it, for instance, to lift the key values. We understand that this is the constant, that’s the host, that’s the severity, that’s the message, and all the rest are just properties to explain what’s going on there.”
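To make the idea of “breaking” raw data into key values concrete, here is a hypothetical sketch (the log format, field names, and regular expression are illustrative, not Loom Systems' implementation):

```python
import re

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+ \S+)\s+(?P<host>\S+)\s+(?P<severity>[A-Z]+)\s+(?P<message>.*)"
)

line = "2017-08-31 12:04:55 web-01 ERROR connection pool exhausted"
record = LOG_PATTERN.match(line).groupdict()
print(record)
# {'timestamp': '2017-08-31 12:04:55', 'host': 'web-01',
#  'severity': 'ERROR', 'message': 'connection pool exhausted'}
```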

Algorithmic Data Modeling
The expanding reliance on algorithms to facilitate data modeling is one of the critical deployments of Artificial Intelligence technologies such as machine learning and deep learning. These cognitive computing capabilities are underpinned by semantic technologies which prove influential in on-the-fly data modeling at scale. The foregoing algorithms are effectual in such time-sensitive use cases partly because of classification technologies which “measure every type of metric in a single one” Mann explained. The automation potential of the use of classifications with AI algorithms is an integral part of hastening the data modeling process in these circumstances. As Mann observed, “For our usual customers, even if it’s a medium-sized enterprise, their data will probably create more than tens of thousands of metrics that will be measured by our software.” The classification enabled by semantic technologies allows for the underlying IT system to understand how to link the various data elements in a way which is communicable and sustainable according to the ensuing schema.

Pervasive Applicability
The result is that organizations are able to model data of various types in a way in which they are not constrained by schema, but rather mutate schema to include new data types and requirements. This ability to create schema as needed is vital to avoiding vendor lock-in and enabling various IT systems to communicate with one another. In such environments, the system “creates the schema and allows the user to manipulate the change accordingly,” Mann reflected. “It understands the schema from the data, and does some of the work of an engineer that would look at the data.” In fact, one of the primary use cases for such modeling is the real-time monitoring of IT systems which has become increasingly germane to both operations and analytics. Crucial to this process are the real-time capabilities involved, which are necessary for big data quantities and velocities. “The system ingests the data in real time and does the computing in real time,” Mann revealed. “Through the data we build a data set where we learn the pattern. From the first several minutes of viewing samples it will build a pattern of these samples and build the baseline of these metrics.”

From Predictive to Preventive
Another pivotal facet of automated data modeling fueled by AI is the predictive functionality which can prevent undesirable outcomes. These capabilities are of paramount importance in real-time monitoring of information systems for operations, and are applicable to various aspects of the Internet of Things and the Industrial Internet as well. Monitoring solutions employing AI-based data modeling are able to determine such events before they transpire due to the sheer amounts of data they are able to parse through almost instantaneously. When monitoring log data, for instance, these solutions can analyze such data and their connotations in a way which vastly exceeds that of conventional manual monitoring of IT systems. In these situations “the logs are being scanned in real time, all the time,” Mann noted. “Usually logs tell you a much richer story. If you are able to scan your logs at the information level, not just at the error level…you would be able to predict issues before they happen because the logs tell you when something is about to be broken.”
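As a toy illustration of that “predict before it breaks” idea (the counts and threshold below are invented), one can watch the rate of warning-level log lines and flag a sustained upward trend:

```python
import numpy as np

warn_counts_per_minute = [2, 3, 2, 4, 5, 7, 9, 12, 16, 21]   # hypothetical sliding window

minutes = np.arange(len(warn_counts_per_minute))
slope = np.polyfit(minutes, warn_counts_per_minute, deg=1)[0]   # growth in warnings per minute

if slope > 1.0:   # threshold chosen purely for illustration
    print(f"warning rate is rising (~{slope:.1f}/min per minute); investigate before it breaks")
```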

Going Forward
Data modeling is arguably the foundation of nearly every subsequent data-focused activity from integration to real-time application monitoring. AI technologies are currently able to accelerate the modeling phase in a way that enables these activities to be determined even more by the actual data themselves, as opposed to relying upon predetermined schema. This flexibility has manifold utility for the enterprise, decreases time to value, and increases employee and IT system efficiency. Its predictive potential only compounds the aforementioned boons, and could very well prove a harbinger of the future for data modeling. According to Mann:

“When you look at statistics, sometimes you can detect deviations and abnormalities, but in many cases you’re also able to detect things before they happen because you can see the trend. So when you’re detecting a trend you see a sequence of events and it’s trending up or down. You’re able to detect what we refer to as predictions which tells you that something is about to take place. Why not fix it now before it breaks?”

Source by jelaniharper

Data Scientists and the Practice of Data Science

I was recently involved in a couple of panel discussions on what it means to be a data scientist and to practice data science. These discussions/debates took place at IBM Insight in Las Vegas in late October. I attended the event as IBM’s guest. The panels, moderated by Brian Fanzo, included me and these data experts:

I enjoyed our discussions and their take on the topic of data science. Our discussion was opened by the question “What is the role of a data scientist in the insight economy?” You can read each of our answers to this question on IBM’s Big Data Hub. While we come from different backgrounds, there was a common theme across our answers. We all think that data science is about finding insights in data to help make better decisions. I offered a more complete answer to that question in a prior post. Today, I want to share some more thoughts about other areas of the field of data science that we talked about in our discussions. The content below reflects my opinion.

What is a Data Scientist?

Figure 1. The three skills of data scientists

As more data professionals are now calling themselves data scientists, it’s important to clarify exactly what a data scientist is. One way to understand data scientists is to understand what kind of skills they bring to bear on analytics projects. It’s generally agreed that a successful data scientist is one who possesses skills across three areas: subject matter expertise in a particular field, programming/technology, and statistics/math (see DJ Patil and Hilary Mason’s take, Drew Conway’s Data Science Venn Diagram (see Figure 1), and a review of many experts’ opinions on this topic).

AnalyticsWeek and I recently took an empirical approach to understanding the skills of data scientists by asking over 500 data professionals about their job roles and their proficiency across 25 data skills in five areas (i.e., business, technology, programming, math/modeling and statistics). A factor analysis of their proficiency ratings revealed three factors: business acumen, technology/programming skills and statistics/math knowledge.
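For readers curious what such an analysis looks like mechanically, here is a hedged sketch (not the original study code) using scikit-learn's FactorAnalysis on stand-in ratings:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_respondents, n_items = 500, 25
ratings = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)  # stand-in proficiency ratings

fa = FactorAnalysis(n_components=3, random_state=0).fit(ratings)
loadings = fa.components_            # shape (3, 25): how each skill item loads on each factor
print(np.round(loadings[:, :5], 2))  # inspect loadings for the first five items
```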

Figure 2. Data professionals in different job roles are proficient in different data skills.

A data scientist who possesses expertise in all data skills is rare. In our survey, none of the respondents were experts in all five skill areas. Instead, our results identified four different types of data scientists, each with varying levels of proficiency in data skills; as expected, different data professionals possessed role-specific skills (see Figure 2). Business Management professionals were the most proficient in business skills. Developers were the most proficient in technology and programming skills. Researchers were most proficient in math/modeling and statistics. Creatives did not possess great proficiency in any one skill.

The Practice of Data Science: Getting Insights from Data

Gil Press offers a great summary of the field of data science. He traces the literary history of the term (it first appeared in use in 1974) and settles on the idea that data science is a way of extracting insights using the powers of computer science and statistics applied to data from a specific field of study.

Figure 3. Six Phases of the CRISP-DM (Cross Industry Standard Process for Data Mining) methodology. Download the IBM SPSS Modeler CRISP-DM Guide here.
But how do you get insights from data? Bernard Marr offers his 5-step SMART approach to extract information. SMART stands for:

  • S = Start with Strategy
  • M = Measure Metrics and Data
  • A = Apply Analytics
  • R = Report Results
  • T = Transform your Business

Another approach is the 6-step CRISP-DM (Cross Industry Standard Process for Data Mining) method (see Figure 3). In a 2014 KDnuggets poll, the CRISP-DM method was the most popular methodology (43%) used by data professionals for analytics, data mining, and data science projects.

These two approaches have a lot in common with each other and both share a lot with a method that has been around for about 1000 years: the scientific method (see Alhazen, a forerunner of the scientific method). The scientific method follows these general steps (see figure 4):

Figure 4. The scientific method is a way to get insights from your data
  1. Formulate a question or problem statement
  2. Generate a hypothesis that is testable
  3. Gather/Generate data to understand the phenomenon in question. Data can be generated through experimentation; when we can’t conduct true experiments, data are obtained through observations and measurements.
  4. Analyze data to test the hypotheses / Draw conclusions
  5. Communicate results to interested parties or take action (e.g., change processes) based on the conclusions. Additionally, the outcome of the scientific method can help us refine our hypotheses for further testing.
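As a compact, hypothetical walk through steps 3 through 5 (SciPy assumed; the data are simulated, not from any real study), consider comparing a control group with a variant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Step 3: gather/generate data for the phenomenon in question.
control = rng.normal(loc=0.10, scale=0.03, size=200)
variant = rng.normal(loc=0.12, scale=0.03, size=200)

# Step 4: analyze the data to test the hypothesis that the means differ.
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)

# Step 5: communicate the result or act on the conclusion.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the variant appears to perform differently.")
```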

The value of data is measured by what you do with it. Whether you’re investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge, the scientific method is an effective way to systematically interrogate your data. Scientists may differ with respect to the variables they use and the problems they study (e.g., medicine, education and business), but they all use the scientific method to advance bodies of knowledge.

Data is, has been and forever will be at the heart of science. The scientific method necessarily involves the collection of empirical evidence, subject to specific principles of reasoning. That is the practice of science, a way of extracting knowledge from data. Data science is science.

The Democratization of Data Science

Taking a scientific approach to analyzing data is not only valuable to data workers; it is also valuable for people who consume, interpret, and make decisions based on the analysis of those data. In business, data users need to think critically about sales reports, social media metrics and quarterly reports. Application vendors are marketing their tools and platforms as a way of making everybody a data scientist, enabling end users (i.e., data users) to get advanced statistical and visualization capabilities to find insights (see Prelert’s take on this here, Tableau’s ideas here and Umbel’s call here).

I believe that the democratization of data science is not only a software problem but also an education problem. Companies need to provide their employees training on statistics and statistical concepts. This type of training gives the employees the ability to think critically about the data (e.g., data source, measurement properties and relevance of the metrics). The better the grasp of statistics employees have, the more insight/value/use they will get from the software they use to analyze/visualize that data.

Statistics is the language of data. Like knowledge of your native language helps you maneuver in the world of words, statistics will help you maneuver in the world of data. As the world around us becomes more quantified, statistical skills will become more and more essential in our daily lives. If you want to make sense of our data-intensive world, you will need to understand statistics.

Conclusions and Final Thoughts

Businesses are relying on data professionals with unique skills to make sense of their data. These data professionals apply their skills to improve decision-making in humans or algorithms. Getting from data to insights, data professionals can adopt a systematic approach to optimize the use of their skills. Following are some conclusions about data scientists and the practice of data science.

  • The practice of data science requires three skills: subject matter expertise, computing skills and statistical knowledge.
  • The general term, ‘data scientist,’ is ambiguous. Our research studied four different types of data scientists: Business management, Programmer, Creative and Researcher. Each role possessed different strengths.
  • Science is a way of thinking, a way of testing ideas using data. An effective practice of data science includes the scientific method. I think that the term, ‘data science,’ is redundant. It’s just science. Science requires the use of data, data to help you understand your business and how the world really works.
  • Offer employees training on statistics. Giving people analytics software and expecting them to excel at data science is like giving them a stethoscope and expecting them to excel at medicine. The better they understand the language of data, the more value they will get from the analytics software they use.

I’ll leave you with some thoughts on data science I shared with Nick Dimeo at IBM Insight.

I would love to hear your thoughts on data scientists and the practice of data science. What do those terms mean to you?

Originally Posted at: Data Scientists and the Practice of Data Science by bobehayes

Aug 17, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Accuracy  Source

[ AnalyticsWeek BYTES]

>> SAS Pushes Big Data, Analytics for Cybersecurity by analyticsweekpick

>> Four Use Cases for Healthcare Predictive Analytics, Big Data by analyticsweekpick

>> The Question to Ask Before Hiring a Data Scientist by michael-li

Wanna write? Click Here

[ NEWS BYTES]

>> Robin Systems’ Container-Based Virtualization Platform for Applications – Virtualization Review Under Virtualization

>> Research delivers insight into the global business analytics and enterprise software market forecast to 2022 – WhaTech Under Business Analytics

>> Creating smart spaces: Five steps to transform your workplace with IoT – TechTarget (blog) Under IOT

More NEWS? Click Here

[ FEATURED COURSE]

Deep Learning Prerequisites: The Numpy Stack in Python


The Numpy, Scipy, Pandas, and Matplotlib stack: prep for deep learning, machine learning, and artificial intelligence… more

[ FEATURED READ]

Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th Edition


The eagerly anticipated Fourth Edition of the title that pioneered the comparison of qualitative, quantitative, and mixed methods research design is here! For all three approaches, Creswell includes a preliminary conside… more

[ TIPS & TRICKS OF THE WEEK]

Strong business case could save your project
Like anything in corporate culture, the project is oftentimes about the business, not the technology. The same thinking applies to data analysis: it’s not always about the technicalities but about the business implications. Data science project success criteria should include project management success criteria as well. This will ensure smooth adoption, easy buy-ins, room for wins and co-operating stakeholders. So, a good data scientist should also possess some qualities of a good project manager.

[ DATA SCIENCE Q&A]

Q:Why is naive Bayes so bad? How would you improve a spam detection algorithm that uses naive Bayes?
A: Naïve: the features are assumed to be independent/uncorrelated
This assumption rarely holds in practice
Improvement: decorrelate the features (transform the covariance matrix into an identity matrix, e.g., by whitening)
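A minimal sketch of that improvement (scikit-learn assumed; synthetic data, and PCA whitening used as one way to decorrelate) might look like this:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=300)     # two strongly correlated features
y = (X[:, 0] + X[:, 2] > 0).astype(int)            # stand-in labels

# Whitening decorrelates the features (identity covariance) before naive Bayes sees them.
model = make_pipeline(PCA(whiten=True), GaussianNB())
model.fit(X, y)
print(model.score(X, y))
```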

Source

[ VIDEO OF THE WEEK]

#HumansOfSTEAM feat. Hussain Gadwal, Mechanical Designer via @STEAMTribe #STEM #STEAM


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Everybody gets so much information all day long that they lose their common sense. – Gertrude Stein

[ PODCAST OF THE WEEK]

#FutureOfData Podcast: Conversation With Sean Naismith, Enova Decisions


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

In late 2011, IDC Digital Universe published a report indicating that some 1.8 zettabytes of data would be created that year.

Sourced from: Analytics.CLUB #WEB Newsletter

Smart Data Modeling: From Integration to Analytics

There are numerous reasons why smart data modeling, which is predicated on semantic technologies and open standards, is one of the most advantageous means of effecting everything from integration to analytics in data management.

  • Business-Friendly—Smart data models are innately understood by business users. These models describe entities and their relationships to one another in terms that business users are familiar with, which serves to empower this class of users in myriad data-driven applications.
  • Queryable—Semantic data models are able to be queried, which provides a virtually unparalleled means of determining provenance, source integration, and other facets of regulatory compliance.
  • Agile—Ontological models readily evolve to include additional business requirements, data sources, and even other models. Thus, modelers are not responsible for defining all requirements upfront, and can easily modify them at the pace of business demands.

According to Cambridge Semantics Vice President of Financial Services Marty Loughlin, the most frequently cited benefit of this approach to data modeling is operational: “There are two examples of the power of semantic modeling of data. One is being able to bring the data together to ask questions that you haven’t anticipated. The other is using those models to describe the data in your environment to give you better visibility into things like data provenance.”

Implicit in those advantages is an operational efficacy that pervades most aspects of the data sphere.

Smart Data Modeling
The operational applicability of smart data modeling hinges on its flexibility. Semantic models, also known as ontologies, exist independently of infrastructure, vendor requirements, data structure, or any other characteristic related to IT systems. As such, they can incorporate attributes from all systems or data types in a way that is aligned with business processes or specific use cases. “This is a model that makes sense to a business person,” Loughlin revealed. “It uses terms that they’re familiar with in their daily jobs, and is also how data is represented in the systems.” Even better, semantic models do not necessitate all modeling requirements prior to implementation. “You don’t have to build the final model on day one,” Loughlin mentioned. “You can build a model that’s useful for the application that you’re trying to address, and evolve that model over time.” That evolution can include other facets of conceptual models, industry-specific models (such as FIBO), and aspects of new tools and infrastructure. The combination of smart data modeling’s business-first approach, adaptable nature and relatively rapid implementation speed is greatly contrasted with typically rigid relational approaches.

Smart Data Integration and Governance
Perhaps the most cogent application of smart data modeling is its deployment as a smart layer between any variety of IT systems. By utilizing platforms reliant upon semantic models as a staging layer for existing infrastructure, organizations can simplify data integration while adding value to their existing systems. The key to integration frequently depends on mapping. When mapping from source to target systems, organizations have traditionally relied upon experts from each of those systems to create what Loughlin called “a source to target document” for transformation, which is given to developers to facilitate ETL. “That process can take many weeks, if not months, to complete,” Loughlin remarked. “The moment you’re done, if you need to make a change to it, it can take several more weeks to cycle through that iteration.”

However, since smart data modeling involves common models for all systems, integration merely includes mapping source and target systems to that common model. “Using common conceptual models to drive existing ETL tools, we can provide high quality, governed data integration,” Loughlin said. The ability of integration platforms based on semantic modeling to automatically generate the code for ETL jobs not only reduces time to action, but also increases data quality while reducing cost. Additional benefits include the relative ease in which systems and infrastructure are added to this process, the tendency for deploying smart models as a catalog for data mart extraction, and the means to avoid vendor lock-in from any particular ETL vendor.

Smart Data Analytics—System of Record
The components of data quality and governance that are facilitated by deploying semantic models as the basis for integration efforts also extend to others that are associated with analytics. Since the underlying smart data models are able to be queried, organizations can readily determine provenance and audit data through all aspects of integration—from source systems to their impact on analytics results. “Because you’ve now modeled your data and captured the mapping in a semantic approach, that model is queryable,” Loughlin said. “We can go in and ask the model where data came from, what it means, and what conversion happened to that data.” Smart data modeling provides a system of record that is superior to many others because of the nature of analytics involved. As Loughlin explained, “You’re bringing the data together from various sources, combining it together in a database using the domain model the way you described your data, and then doing analytics on that combined data set.”

Smart Data Graphs
By leveraging these models on a semantic graph, users are able to reap a host of analytics benefits that they otherwise couldn’t because such graphs are focused on the relationships between nodes. “You can take two entities in your domain and say, ‘find me all the relationships between these two entities’,” Loughlin commented about solutions that leverage smart data modeling in RDF graph environments. Consequently, users are able to determine relationships that they did not know existed. Furthermore, they can ask more questions based on those relationships than they otherwise would be able to ask. The result is richer analytics results based on the overarching context between relationships that is largely attributed to the underlying smart data models. The nature and number of questions asked, as well as the sources incorporated for such queries, is illimitable. “Semantic graph databases, from day one have been concerned with ontologies…descriptions of schema so you can link data together,” explained Franz CEO Jans Aasman. “You have descriptions of the object and also metadata about every property and attribute on the object.”
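The query pattern Loughlin describes can be sketched with the open-source rdflib library (the entities and predicates below are invented for illustration; production deployments would run the same SPARQL against a semantic graph database):

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Citibank, EX.subsidiaryOf, EX.Citigroup))
g.add((EX.Citigroup, EX.sponsors, EX.CitiBike))

# "Find me all the relationships between these two entities", in either direction.
query = """
    PREFIX ex: <http://example.org/>
    SELECT ?p WHERE {
        { ex:Citibank ?p ex:Citigroup . }
        UNION
        { ex:Citigroup ?p ex:Citibank . }
    }
"""
for row in g.query(query):
    print(row.p)   # -> http://example.org/subsidiaryOf
```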

Modeling Models
When one considers the different facets of modeling that smart data modeling includes—business models, logical models, conceptual models, and many others—it becomes apparent that the true utility in this approach is an intrinsic modeling flexibility upon which other approaches simply can’t improve. “What we’re actually doing is using a model to capture models,” Cambridge Semantics Chief Technology Officer Sean Martin observed. “Anyone who has some form of a model, it’s probably pretty easy for us to capture it and incorporate it into ours.” The standards-based approach of smart data modeling provides the sort of uniform consistency required at an enterprise level, which functions as means to make data integration, data governance, data quality metrics, and analytics inherently smarter.

Source

Aug 10, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Productivity  Source

[ AnalyticsWeek BYTES]

>> Important Strategies to Enhance Big Data Access by thomassujain

>> Predictive Workforce Analytics Studies: Do Development Programs Help Increase Performance Over Time? by groberts

>> How to file a patent by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> NTT Com plans to invest over $160 million for data center expansion in India – ETCIO.com Under Data Center

>> Goergen Institute for Data Science provides new opportunities for … – University of Rochester Newsroom Under Data Science

>> Hints of iPhone 8 Showing Up in Web Analytics – Mac Rumors Under Analytics

More NEWS? Click Here

[ FEATURED COURSE]

CS109 Data Science


Learning from data in order to gain useful predictions and insights. This course introduces methods for five key facets of an investigation: data wrangling, cleaning, and sampling to get a suitable data set; data managem… more

[ FEATURED READ]

On Intelligence


Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one strok… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from zombie apocalypse from unscalable models
One living, breathing zombie in today’s analytical models is the glaring absence of error bars. Not every model is scalable or holds its ground as data grows. The error bars attached to almost every model should be duly calibrated; as business models take in more data, error bars keep them sensible and in check. If error bars are not accounted for, we make our models susceptible to failure, leading us to a Halloween we never want to see.

[ DATA SCIENCE Q&A]

Q:What is: lift, KPI, robustness, model fitting, design of experiments, 80/20 rule?
A: Lift:
It’s a measure of a targeting model’s (or rule’s) performance at predicting or classifying cases as having an enhanced response (with respect to the population as a whole), measured against a random-choice targeting model. Lift is simply: target response / average response.

Suppose a population has an average response rate of 5% (to a mailing, for instance). If a certain model (or rule) identifies a segment with a response rate of 20%, then lift = 20/5 = 4.

Typically, the modeler seeks to divide the population into quantiles, and rank the quantiles by lift. He can then consider each quantile, and by weighing the predicted response rate against the cost, he can decide to market that quantile or not.
“if we use the probability scores on customers, we can get 60% of the total responders we’d get mailing randomly by only mailing the top 30% of the scored customers”.
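The arithmetic is simple enough to show inline (numbers taken from the example above):

```python
population_response = 0.05   # 5% average response rate across the whole population
segment_response = 0.20      # 20% response rate in the model-selected segment

lift = segment_response / population_response
print(lift)   # 4.0
```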

KPI:
– Key performance indicator
– A type of performance measurement
– Examples: 0 defects, 10/10 customer satisfaction
– Relies upon a good understanding of what is important to the organization

More examples:

Marketing & Sales:
– New customers acquisition
– Customer attrition
– Revenue (turnover) generated by segments of the customer population
– Often done with a data management platform

IT operations:
– Mean time between failure
– Mean time to repair

Robustness:
– Statistics with good performance even if the underlying distribution is not normal
– Statistics that are not affected by outliers
– A learning algorithm that can reduce the chance of fitting noise is called robust
– Median is a robust measure of central tendency, while mean is not
– Median absolute deviation is also more robust than the standard deviation
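A quick NumPy illustration of that robustness point, using made-up values: a single outlier moves the mean substantially but barely moves the median.

```python
import numpy as np

values = np.array([10, 11, 9, 10, 12, 10, 11], dtype=float)
with_outlier = np.append(values, 1000.0)

print(values.mean(), np.median(values))               # ~10.43 and 10.0
print(with_outlier.mean(), np.median(with_outlier))   # ~134.13 and 10.5
```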

Model fitting:
– How well a statistical model fits a set of observations
– Examples: AIC, R2, Kolmogorov-Smirnov test, Chi 2, deviance (glm)

Design of experiments:
The design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation.
In its simplest form, an experiment aims at predicting the outcome by changing the preconditions, the predictors.
– Selection of the suitable predictors and outcomes
– Delivery of the experiment under statistically optimal conditions
– Randomization
– Blocking: an experiment may be conducted with the same equipment to avoid any unwanted variations in the input
– Replication: performing the same combination run more than once, in order to get an estimate for the amount of random error that could be part of the process
– Interaction: when an experiment has 3 or more variables, the situation in which the interaction of two variables on a third is not additive

80/20 rule:
– Pareto principle
– 80% of the effects come from 20% of the causes
– 80% of your sales come from 20% of your clients
– 80% of a company’s complaints come from 20% of its customers

Source

[ VIDEO OF THE WEEK]

Surviving Internet of Things


Subscribe to YouTube

[ QUOTE OF THE WEEK]

He uses statistics as a drunken man uses lamp posts—for support rather than for illumination. – Andrew Lang

[ PODCAST OF THE WEEK]

Using Analytics to build A #BigData #Workforce


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

39 percent of marketers say that their data is collected ‘too infrequently or not real-time enough.’

Sourced from: Analytics.CLUB #WEB Newsletter