Jan 17, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data Storage (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Big Data Analytics, Supercomputing Seed Growth in Plant Research by analyticsweekpick

>> August 14, 2017 Health and Biotech analytics news roundup by pstein

>> Refugee migration: Where are people fleeing from and where are they going? by analyticsweek

Wanna write? Click Here

[ NEWS BYTES]

>> Prescriptive and Predictive Analytics Market Will Boast Developments in Global Industry by 2018-2025 – Leading Journal (blog) Under Talent Analytics

>> Machine-learning algorithm predicts how cells repair broken DNA – EurekAlert (press release) Under Machine Learning

>> 200 jobs in Belfast being created by US cyber security firm – RTE.ie Under cyber security

More NEWS? Click Here

[ FEATURED COURSE]

Machine Learning


6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending … more

[ FEATURED READ]

Storytelling with Data: A Data Visualization Guide for Business Professionals


Storytelling with Data teaches you the fundamentals of data visualization and how to communicate effectively with data. You’ll discover the power of storytelling and the way to make data a pivotal point in your story. Th… more

[ TIPS & TRICKS OF THE WEEK]

Keeping Biases Checked during the last mile of decision making
Today, a data-driven leader, data scientist, or other data expert is constantly put to the test: helping the team solve a problem using their skills and expertise. Believe it or not, part of that decision tree is derived from intuition, which introduces a bias that taints the resulting suggestions. Most skilled professionals understand and handle these biases well, but in a few cases we fall into tiny traps and find ourselves caught in biases that impair our judgment. So it is important to keep intuition bias in check when working on a data problem.

[ DATA SCIENCE Q&A]

Q: Is it better to spend 5 days developing a 90% accurate solution, or 10 days for 100% accuracy? Does it depend on the context?
A: * “Premature optimization is the root of all evil”
* At the beginning, a quick-and-dirty model is better
* Optimize later
Another answer:
– It depends on the context
– Is error acceptable? Consider fraud detection or quality assurance

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Eloy Sasot, News Corp


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Big Data is not the new oil. – Jer Thorp

[ PODCAST OF THE WEEK]

@JohnTLangton from @Wolters_Kluwer discussed his #AI Lead Startup Journey #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Facebook stores, accesses, and analyzes 30+ petabytes of user-generated data.

Sourced from: Analytics.CLUB #WEB Newsletter

Interpreting Single Items from the SUS

The System Usability Scale (SUS) has been around for decades and is used by hundreds of organizations globally.

The 10-item SUS questionnaire is a measure of a user’s perception of the usability of a “system.”

A system can be just about anything a human interacts with: software apps (business and consumer), hardware, mobile devices, mobile apps, websites, or voice user interfaces.

The SUS questionnaire is scored by combining the 10 items into a single SUS score ranging from 0 to 100. From its creation, though, John Brooke cautioned against interpreting individual items:

“Note that scores for individual items are not meaningful on their own.” ~ John Brooke

Brooke’s caution against examining scores for the individual items of the SUS was appropriate at the time. After all, he was publishing a “quick and dirty” questionnaire with analyses based on data from 20 people.
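
For reference, the standard SUS scoring procedure works like this: each of the 10 items is answered on a 1-to-5 scale; odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to produce the 0–100 score. A minimal sketch in Python:

```python
def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 item responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each from 1 to 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 4, 2]))  # -> 80.0
```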

There is a sort of conventional wisdom that multiple items are superior to single items; in fact, single-item measures and analyses are often dismissed in peer-reviewed journals.

More items will by definition increase the internal-consistency reliability of a questionnaire as measured by Cronbach’s alpha; in fact, you can’t measure internal-consistency reliability with only one item. However, other methods measure reliability, including test-retest reliability. Single measures, such as satisfaction, brand attitude, task ease, and likelihood to recommend, also exhibit sufficient test-retest reliability, and little, if anything, may be gained by using multiple items.
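
As an aside, Cronbach’s alpha for a k-item questionnaire is computed from the item variances and the variance of the summed score, which makes it clear why it is undefined for a single item (k - 1 = 0 in the denominator). A minimal sketch, assuming a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                          # number of items; must be >= 2
    item_vars = x.var(axis=0, ddof=1)       # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with three respondents and three items
print(cronbach_alpha([[4, 5, 4], [2, 3, 2], [5, 5, 4]]))  # -> ~0.98
```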

SUS Benchmarks

John Brooke didn’t publish any benchmarks or guidance for what makes a “good” SUS score. But because the SUS has been used extensively by other researchers who have published the results, we have been able to derive a database of scores. Table 1 shows SUS grades and percentiles that Jim Lewis and I put together from that database, which itself is an adaptation of work from Bangor and Kortum.

Grade   SUS         Percentile Range
A+      84.1-100    96-100
A       80.8-84.0   90-95
A-      78.9-80.7   85-89
B+      77.2-78.8   80-84
B       74.1-77.1   70-79
B-      72.6-74.0   65-69
C+      71.1-72.5   60-64
C       65.0-71.0   41-59
C-      62.7-64.9   35-40
D       51.7-62.6   15-34
F       0-51.6      0-14

Table 1: SUS scores, grades, and percentile ranks.

To use the table, find your raw SUS score in the middle column and then find its corresponding grade in the left column and percentile rank in the right column. For example, a SUS score of 75 is a bit above the global average of 68 and nets a “B” grade. A SUS score below 50 puts it in the “F” grade with a percentile rank among the worst interfaces (worse than 86% or better than only 14%).
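
If you want to automate the lookup, the grade bands in Table 1 translate directly into a small function; a minimal sketch keyed on the lower bound of each raw-SUS band:

```python
# Grade boundaries from Table 1 (lower bound of each raw-SUS band).
SUS_GRADES = [
    (84.1, "A+"), (80.8, "A"), (78.9, "A-"), (77.2, "B+"), (74.1, "B"),
    (72.6, "B-"), (71.1, "C+"), (65.0, "C"), (62.7, "C-"), (51.7, "D"),
]

def sus_grade(score):
    """Map a raw SUS score (0-100) to the letter grade in Table 1."""
    for lower, grade in SUS_GRADES:
        if score >= lower:
            return grade
    return "F"

print(sus_grade(75))  # -> "B", a bit above the global average of 68
print(sus_grade(48))  # -> "F"
```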

Why Develop Item-Level Benchmarks?

While the SUS provides an overall measure of perceived ease and our grading scale provides a way to interpret the raw score, researchers may want to measure and set targets for other more specific experience attributes (e.g. perceptions of findability, complexity, consistency, and confidence). To do so, researchers would need to develop specific items to measure those more specific attributes.

Some attributes, such as findability, do not appear in the 10 SUS items. Other attributes, such as perceived complexity (Item 2), perceived ease of use (Item 3), perceived consistency (Item 6), perceived learnability (Item 7), and confidence in use (Item 9) do appear in the SUS.

Researchers who use the SUS and who also need to assess any of these specific attributes would need to decide whether to ask participants in their studies to rate this attribute twice (once in the SUS and again using a separate item) or to use the response to the SUS item in two ways (contributing to the overall SUS score and as a measure of the specific attribute of interest). The latter, using the response to the SUS item in two ways, is the more efficient approach.

In short, using item benchmarks saves respondents time (they answer fewer items) and saves researchers time (they don’t have to derive new items), with the bonus of having benchmarks that make the responses more meaningful.

Developing SUS Item Level Benchmarks

To help make the process of understanding individual SUS items better, Jim Lewis and I compiled data from 166 unpublished industrial usability studies/surveys based on scores from 11,855 individual SUS questionnaires.

We then used regression equations to predict overall SUS scores from the individual items. We found each item explained between 35% and 89% of the full SUS score (a large percentage for a single item). Full details of the regression equations and process are available in the Journal of Usability Studies article.
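
To illustrate the general approach, here is a sketch that regresses full SUS scores on a single item’s scores and reports the variance explained. The data below are hypothetical; the published regression equations are in the JUS article.

```python
import numpy as np

# Hypothetical data: one Item 3 rating and one full SUS score per respondent.
item3 = np.array([3, 4, 5, 2, 4, 3, 5, 4, 2, 5], dtype=float)
sus = np.array([60, 75, 88, 45, 72, 62, 90, 78, 40, 85], dtype=float)

slope, intercept = np.polyfit(item3, sus, deg=1)  # simple linear regression
r_squared = np.corrcoef(item3, sus)[0, 1] ** 2    # share of variance explained

print(f"predicted SUS = {intercept:.1f} + {slope:.1f} * item3 (R^2 = {r_squared:.2f})")
```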

To make item benchmarks easy to reference, we computed the score you’d need for an average “C” score of 68 or a good score of 80, an “A-.” Why 80? We’ve found that a SUS of 80 has become a common industrial goal. It’s also a good psychological threshold that’s attainable. Achieving a raw SUS score of 90 sounds better but is extraordinarily difficult (only one study in the database exceeded 90: data from Netflix).

Table 2 shows the mean score you would need for each item to achieve an average “C” or good “A-” score.

SUS Item Target for Average Score Target for Good Score
1. I think that I would like to use this system frequently. ≥ 3.39 ≥ 3.80
2. I found the system unnecessarily complex. ≤ 2.44 ≤ 1.85
3. I thought the system was easy to use. ≥ 3.67 ≥ 4.24
4. I think that I would need the support of a technical person to be able to use this system. ≤ 1.85 ≤ 1.51
5. I found the various functions in this system were well integrated. ≥ 3.55 ≥ 3.96
6. I thought there was too much inconsistency in this system. ≤ 2.20 ≤ 1.77
7. I would imagine that most people would learn to use this system very quickly. ≥ 3.71 ≥ 4.19
8. I found the system very cumbersome to use. ≤ 2.25 ≤ 1.66
9. I felt very confident using the system. ≥ 3.72 ≥ 4.25
10. I needed to learn a lot of things before I could get going with this system. ≤ 2.09 ≤ 1.64

Table 2: Benchmarks for average and good scores for the 10 SUS items.

For example, if you’re using Item 3, “I thought the system was easy to use,” then a mean score of 3.67 would correspond to a SUS score of 68 (an average overall system score). For an above average SUS score of 80, the corresponding target for Item 3 would be a mean score of at least 4.24.

Note that due to the mixed tone of the SUS items, the directionality of the item targets differs for odd- and even-numbered items. Specifically, for odd-numbered (positively worded) items, observed means need to be at or above the targets; for even-numbered (negatively worded) items, observed means need to be at or below the targets. For example, for Item 2, “I found the system unnecessarily complex,” you would want a mean at or below 2.44 to achieve an average score (SUS equivalent of 68) and at or below 1.85 for a good score (SUS equivalent of 80). The sketch below automates the check.
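
Here is a minimal sketch of that directional check in Python, using the “good score” targets from Table 2 (the average-score targets work the same way):

```python
# Table 2 targets for a "good" (SUS-equivalent 80) experience, keyed by item number.
GOOD_TARGETS = {1: 3.80, 2: 1.85, 3: 4.24, 4: 1.51, 5: 3.96,
                6: 1.77, 7: 4.19, 8: 1.66, 9: 4.25, 10: 1.64}

def meets_good_target(item_number, observed_mean):
    """Check an observed item mean against its Table 2 'good' benchmark."""
    target = GOOD_TARGETS[item_number]
    if item_number % 2 == 1:          # odd items: positively worded, higher is better
        return observed_mean >= target
    return observed_mean <= target    # even items: negatively worded, lower is better

print(meets_good_target(3, 4.30))  # True: 4.30 >= 4.24
print(meets_good_target(2, 2.00))  # False: 2.00 > 1.85
```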

Summary

The popularity of the SUS has allowed for the creation of normalized databases and guidance on what constitutes poor, good, or excellent scores. Researchers may on occasion want to use single items from the SUS to benchmark more specific constructs (e.g. “I felt very confident using the system” representing user confidence). Using data from almost 12,000 participants, we were able to create benchmarks for individual SUS items corresponding to average “C” scores and good “A-” SUS-equivalent scores. These benchmarks let researchers know what mean value to aim for to achieve an average or good experience when interpreting single items from the SUS.

Originally Posted at: Interpreting Single Items from the SUS

Borrowing Technology from Media & Entertainment for Big Data Analytics in the Cloud

For most of computing’s history, data meant “structured” data or data that fits neatly into pre-defined categories and rows stored in databases or spreadsheets. But the big data movement has changed all of that with the proliferation of unstructured data analysis. Unstructured data is any data that doesn’t fit into a predefined data model. It includes things like video, images, text, and all the data being logged by sensors and the myriad of digital devices. Where structured data is relatively easy to store and analyze using traditional technology, unstructured data isn’t.

Nonetheless, massive collections of unstructured data are today being analyzed for altruistic purposes like combating crime and preventing disease, but also for profit-motivated goals like spotting business trends. And, as we’ve entered an era of pervasive surveillance, including aerial surveillance by drones and low-earth-orbit satellites capable of delivering 50 cm resolution imagery, media content (photos, videos, and audio) is more relevant to big data analytics than ever before.

Unstructured data tends to be vastly larger than structured data, and is mostly responsible for our crossing the threshold from regular old data to “big data.” That threshold is not defined by a specific number of terabytes or even petabytes, but by what happens when data accumulates to an amount so large that innovative techniques are required to store, analyze and move it. Public cloud computing technology is one of these innovations that’s being applied to big data analytics because it offers a virtually unlimited elastic supply of compute power, networking and storage with a pay-for-use pricing model (all of which opens up new possibilities for analyzing both unstructured and structured big data).

Before its recent and unfortunate shutdown, the respected tech news and research site GigaOM released a survey on enterprise big data. In it, over 90% of participants said they planned to move more than a terabyte of data into the cloud, and 20% planned to move more than 100 TB. Cloud storage is a compelling solution, both as an elastic repository for this overflowing data and as a location readily accessible to cloud-based analysis.

However, one of the challenges that comes with using public cloud computing and cloud storage is getting the data into the cloud in the first place. Moving large files and bulk data sets over the Internet can be very inefficient with traditional protocols like FTP and HTTP, which remain the most common way organizations move large files and the foundation for most of the options cloud storage providers offer for getting your data to them, short of shipping hard drives.

In that same GigaOM survey, 24% expressed concern about whether their available bandwidth can accommodate pushing their large data volumes up to the cloud, and 21% worried that they don’t have the expertise to carry out the data migration (read about all the options for moving data to any of the major cloud storage providers, and you too might be intimidated).

While bandwidth and expertise are very legitimate concerns, there are SaaS (Software as a Service) large file transfer solutions that can make optimal use of bandwidth, are very easy to use and integrate with Amazon S3, Microsoft Azure and Google Cloud. In fact, the foundation technology of these solutions was originally built to move very large media files throughout the production, post production and distribution of film and television.

Back in the early 2000s, when the Media & Entertainment industry began actively transitioning from physical media, including tape and hard drives, to digital file-based workflows, it had a big data movement problem too. For companies like Disney and the BBC, sending digital media between their internal locations and external editing or broadcast partners was a serious issue. Compared to everything else moving over the Internet, those files were huge. (And broadcast masters are relatively small compared to the 4K raw camera footage being captured today; an hour of raw camera footage often requires a terabyte or more of storage.)

During M&E’s transition from physical media to file-based media, companies like Signiant started developing new protocols for the fast transfer of large files over public and private IP networks, with the high security that the movie industry requires for their most precious assets. The National Academy of Television Arts and Sciences even recognized Signiant’s pioneering role with a Technology and Engineering Emmy award in 2014.

Today, that technology has evolved in step with the cloud revolution, and SaaS accelerated large file transfer technology is expanding to other industries. Far faster and more reliable than older technologies like FTP and HTTP, this solution can also be delivered as a service, so users do not have to worry about provisioning hardware and software infrastructure, including scaling and balancing servers for load peaks and valleys. The “expertise” many worry about needing is a non-issue because the solution is so simple to use. And it’s being used in particular to push large volumes to cloud storage for all kinds of time-sensitive projects, including big data analytics. For example, scientists are analyzing images of snow and ice cover to learn more about climate change, and (interesting though less benevolent) businesses are analyzing images of competitors’ parking lots — counting cars by make and model — in order to understand the shopping habits and demographics of their customers.

It’s always fascinating to see how innovation occurs. It almost never springs from nothing, but is adapted from techniques and technologies employed somewhere else to solve a different challenge. Who would have thought, at the turn of the century, that technology developed for Media & Entertainment would be so relevant to big data analytics in science, government, and business? And that technology used to produce and deliver entertainment could be leveraged for the betterment of society?

Originally posted via “Borrowing Technology from Media & Entertainment for Big Data Analytics in the Cloud”

Originally Posted at: Borrowing Technology from Media & Entertainment for Big Data Analytics in the Cloud

Jan 10, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

SQL Database (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Improving the Customer Experience Through Big Data [VIDEO] by bobehayes

>> Accelerating Discovery with a Unified Analytics Platform for Genomics by analyticsweek

>> The UX of Brokerage Websites by analyticsweek

Wanna write? Click Here

[ NEWS BYTES]

>> Italy-America Chamber, Luxury Marketing Council host 2nd Annual Luxury Summit – Luxury Daily Under Social Analytics

>> Top five business analytics intelligence trends for 2019 – Information Age Under Analytics

>> Billions of dollars have not helped Indian e-tailers figure out AI and big data – Quartz Under Big Data Analytics

More NEWS? Click Here

[ FEATURED COURSE]

Artificial Intelligence


This course includes interactive demonstrations which are intended to stimulate interest and to help students gain intuition about how artificial intelligence methods work under a variety of circumstances…. more

[ FEATURED READ]

The Industries of the Future


The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring…. more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better time to talk about our increasing dependence on data analytics to help with our decision making. Data- and analytics-driven decision making is rapidly sneaking its way into our core corporate DNA, yet we are not churning out practice grounds to test those models fast enough. Such snug-looking models have hidden nails that can cause uncharted pain if they go unchecked. This is the right time to start thinking about putting an Analytics Club [a Data Analytics CoE] in your workplace to lab out best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q: Why is naive Bayes so bad? How would you improve a spam detection algorithm that uses naive Bayes?
A: * Naive: the features are assumed independent/uncorrelated
* That assumption is not realistic in many cases
* Improvement: decorrelate the features so the covariance matrix becomes the identity matrix (see the sketch below)
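
A minimal sketch of the decorrelation idea, assuming scikit-learn: PCA with whitening rotates and rescales the features so that their covariance matrix becomes (approximately) the identity, which better matches the independence assumption before fitting Gaussian naive Bayes. The dataset here is synthetic, for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Synthetic data with correlated features (illustration only).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

plain = GaussianNB()
decorrelated = make_pipeline(PCA(whiten=True), GaussianNB())  # whiten first

print("plain NB:       ", cross_val_score(plain, X, y, cv=5).mean())
print("decorrelated NB:", cross_val_score(decorrelated, X, y, cv=5).mean())
```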

Source

[ VIDEO OF THE WEEK]

@JustinBorgman on Running a data science startup, one decision at a time #Futureofdata #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Data beats emotions. – Sean Rad, founder of Ad.ly

[ PODCAST OF THE WEEK]

@JustinBorgman on Running a data science startup, one decision at a time #Futureofdata #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

73% of organizations have already invested or plan to invest in big data by 2016.

Sourced from: Analytics.CLUB #WEB Newsletter

Customer Loyalty Feedback Meets Customer Relationship Management

In my new book, Total Customer Experience, I illustrate why three types of customer loyalty are needed to understand the different ways your customers can show their loyalty towards your company or brand. The three types of loyalty are:

  1. Retention Loyalty: likelihood of customers to stay with a company
  2. Advocacy Loyalty: likelihood of customers to recommend the company/ advocate on the company’s behalf
  3. Purchasing Loyalty: likelihood of customers to expand their relationship with the company

Using this multi-faceted model, I developed a loyalty measurement approach, referred to as the RAPID Loyalty Approach, to help companies get a more comprehensive picture of customer loyalty. Understanding the factors that impact these different types of loyalty helps companies target customer experience improvement strategies to increase different types of customer loyalty.

Data Integration

When companies are able to link these RAPID loyalty metrics with other customer information, like purchase history, campaign responses, and employee/partner feedback, the customer insights become deeper. TCELab (where I am the Chief Customer Officer) is working with Clicktools to help Salesforce customers implement the RAPID Loyalty Approach. This partnership brings together TCELab’s survey knowledge and advisory services with Clicktools’ exceptional feedback software and Salesforce integration; for the fifth consecutive year, Clicktools has received the Salesforce AppExchange™ Customer Choice Award for Best Survey App.

TCELab will include RAPID surveys in Clicktools’ survey library, available in all Clicktools editions and integrated easily with a RAPID Salesforce.com custom object. Salesforce reports and dashboards, including linkage analysis, will follow. Customers can call on the expertise of TCELab for advice on tailoring the surveys for their organization and for support in analysis and reporting.

Joint Whitepaper from TCELab and Clicktools

David Jackson, founder and CEO of Clicktools, and I have co-written a whitepaper titled, “RAPID Loyalty: A Comprehensive Approach to Customer Loyalty,” to present the basic structure and benefits of the RAPID approach and to offer Clicktools customers access to a special program for getting started.

Download the Whitepaper >>

Originally Posted at: Customer Loyalty Feedback Meets Customer Relationship Management by bobehayes

Emergence of #DataOps Age – @AndyHPalmer #FutureOfData #Podcast


Emergence of #DataOps Age – @AndyHPalmer #FutureOfData

Youtube: https://youtu.be/ER9mHaWMMww
iTunes: http://math.im/itunes

In this podcast, @AndyPalmer from @Tamr sat with @Vishaltx from @AnalyticsWeek to talk about the emergence of, need for, and market around DataOps, a specialized capability that merges data engineering with the DevOps ecosystem in response to increasingly convoluted data silos and complicated processes. Andy shared his perspective on what some businesses and their leaders are doing wrong and how businesses need to rethink their data silos to future-proof themselves. This is a good podcast for any data leader thinking about cracking the code on getting high-quality insights from data.

Andy’s Recommended Read:
Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker https://amzn.to/2Lc6WqK
The Three-Body Problem by Cixin Liu and Ken Liu https://amzn.to/2rQyPvp

Andy’s BIO:
Andy Palmer is a serial entrepreneur who specializes in accelerating the growth of mission-driven startups. Andy has helped found and/or fund more than 50 innovative companies in technology, health care and the life sciences. Andy’s unique blend of strategic perspective and disciplined tactical execution is suited to environments where uncertainty is the rule rather than the exception. Andy has a specific passion for projects at the intersection of computer science and the life sciences.

Most recently, Andy co-founded Tamr, a next generation data curation company and Koa Labs, a start-up club in the heart of Harvard Square, Cambridge, MA.

Specialties: Software, Sales & Marketing, Web Services, Service Oriented Architecture, Drug Discovery, Database, Data Warehouse, Analytics, Startup, Entrepreneurship, Informatics, Enterprise Software, OLTP, Science, Internet, ecommerce, Venture Capital, Bootstrapping, Founding Team, Venture Capital firm, Software companies, early stage venture, corporate development, venture-backed, venture capital fund, world-class, stage venture capital

About #Podcast:
#FutureOfData podcast is a conversation starter to bring leaders, influencers and lead practitioners to come on show and discuss their journey in creating the data driven future.

Wanna Join?
If you or anyone you know wants to join in,
register your interest by emailing info@analyticsweek.com

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Originally Posted at: Emergence of #DataOps Age – @AndyHPalmer #FutureOfData #Podcast by v1shal

Jan 03, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Conditional Risk (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Looking out for Big Data Capital of the World by v1shal

>> It’s Official! Talend to Welcome Stitch to the Family! by analyticsweekpick

>> Data Management Rules for Analytics by analyticsweek

Wanna write? Click Here

[ NEWS BYTES]

>> Startups aspiring to market like big brands: with Smartech & AI, today they can – YourStory.com Under Prescriptive Analytics

>> Ecolab Inc (NYSE:ECL) Institutional Investor Sentiment Analysis – The Cardinal Weekly (press release) Under Sentiment Analysis

>> Data center outsourcing faces a legal test – DatacenterDynamics Under Data Center

More NEWS? Click Here

[ FEATURED COURSE]

Python for Beginners with Examples


A practical Python course for beginners with examples and exercises…. more

[ FEATURED READ]

The Black Swan: The Impact of the Highly Improbable


A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences. In this groundbreaking and prophetic book, Taleb shows in a playful way that Black Swan events explain almost eve… more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
Research by Cornerstone OnDemand pointed out the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult for an isolated workforce to keep up with. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and an increased ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q: How do you test whether a new credit risk scoring model works?
A: * Test on a holdout set
* Kolmogorov-Smirnov test

Kolmogorov-Smirnov test:
– A non-parametric test
– Compares a sample with a reference probability distribution, or compares two samples
– Quantifies the distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution
– Or between the empirical distribution functions of two samples
– Null hypothesis (two-sample test): the samples are drawn from the same distribution
– Can be modified into a goodness-of-fit test
– In our case: compare the cumulative percentages of good accounts against the cumulative percentages of bad accounts (see the sketch below)
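
A minimal sketch of the two-sample Kolmogorov-Smirnov test, assuming SciPy and hypothetical model scores for accounts known to be good or bad in a holdout set:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_good = rng.normal(650, 50, 500)  # hypothetical scores of good accounts
scores_bad = rng.normal(580, 50, 200)   # hypothetical scores of bad accounts

ks_stat, p_value = stats.ks_2samp(scores_good, scores_bad)
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3g}")
# The KS statistic is the maximum gap between the two empirical CDFs;
# a larger gap means the model separates good from bad accounts better.
```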

Source

[ VIDEO OF THE WEEK]

Data-As-A-Service (#DAAS) to enable compliance reporting


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

You can have data without information, but you cannot have information without data. – Daniel Keys Moran

[ PODCAST OF THE WEEK]

Solving #FutureOfOrgs with #Detonate mindset (by @steven_goldbach & @geofftuff) #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

2.7 zettabytes of data exist in the digital universe today.

Sourced from: Analytics.CLUB #WEB Newsletter

Discussing the World of Crypto with @JoelComm / @BadCrypto


Discussing the World of Crypto with @JoelComm / @BadCrypto #FutureOfData

Youtube: https://youtu.be/xJucEIDitas
iTunes: http://apple.co/2ynxopz

In this podcast, Joel Comm from The Bad Crypto Podcast sat with Vishal Kumar, CEO of AnalyticsWeek, to discuss the world of cryptocurrencies. The discussion sheds light on the nuances of the rapidly exploding cryptocurrency world and some of the thinking behind the currencies, as well as the opportunities and risks in the industry. Joel shares his insights on how to think about these currencies and the long-term implications of the algorithms that run them. The podcast is a great listen for anyone who wants to understand the world of crypto.

*Please note: this podcast and its content do not constitute investment advice, nor are they intended to exert any positive or negative influence. Cryptocurrencies are highly volatile, and any investor must use absolute caution and care when evaluating such currencies.*

Joel’s Recommended Read:
Cryptocurrencies 101 By James Altucher http://bit.ly/2Bi5FMv

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Joel’s BIO:
A knowledgeable and inspirational speaker, Joel speaks on a variety of business and entrepreneurial topics. He presents a step-by-step playbook on how to use social media as a leveraging tool to expand the reach of your brand, increase your customer base, and create fierce brand loyalty for your business. Joel is also able to speak with authority on the various ways to harness the marketing power of technology to explode profits. He offers an inspiring yet down-to-earth call to action for those who dream of obtaining growth and financial success. As someone who went from having only 87 cents in his bank account to creating multiple successful businesses, Joel is uniquely poised to instruct and inspire when it comes to using the various forms of new media as avenues towards the greater goal of business success. He is a broadcast veteran with thousands of hours of radio, podcasting, television and online video experience. Joel is the host of two popular, yet completely different podcasts. FUN with Joel Comm features the lighter side of top business and social leaders. The Bad Crypto Podcast makes cryptocurrency and bitcoin understandable to the masses.

Joel is the New York Times best-selling author of 14 books, including The AdSense Code, Click Here to Order: Stories from the World’s Most Successful Entrepreneurs, KaChing: How to Run an Online Business that Pays and Pays, Twitter Power 3.0, and Self Employed: 50 Signs That You Might Be an Entrepreneur. He has also written over 40 ebooks. He has appeared in The New York Times, on Jon Stewart’s The Daily Show, on CNN online, on Fox News, and many other places.

About #Podcast:
#FutureOfData podcast is a conversation starter to bring leaders, influencers and lead practitioners to come on show and discuss their journey in creating the data driven future.

Wanna Join?
If you or anyone you know wants to join in,
register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Source

Nick Howe (@Area9Nick) talks about fabric of learning organization to bring #JobsOfFuture #podcast

Youtube: https://www.youtube.com/watch?v=-1ZP_tbZFgI

In this podcast, Nick Howe (@NickJHowe) from @Area9Learning talks about the transforming learning landscape. He sheds light on some of the challenges in learning and some of the ways learning could keep pace with the evolving world and its needs. Nick also covers tactical steps that businesses could adopt to create a world-class learning organization. This podcast is a must for any learning organization.

Nick’s Recommended Read:
The End of Average: Unlocking Our Potential by Embracing What Makes Us Different by Todd Rose https://amzn.to/2kiahYN
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom https://amzn.to/2IAPURg

Podcast Link:
iTunes: http://math.im/jofitunes
GooglePlay: http://math.im/jofgplay

Nick’s BIO:
Nick Howe is an award-winning Chief Learning Officer and business leader with a focus on the application of innovative education technologies. He is the Chief Learning Officer at Area9 Lyceum, one of the global leaders in adaptive learning technology, a Strategic Advisor to the Institute of Simulation and Training at the University of Central Florida, and a board advisor to multiple EdTech startups.

For twelve years Nick was the Chief Learning Officer at Hitachi Data Systems where he built and led the corporate university and online communities serving over 50,000 employees, resellers and customers.

With over 25 years of global sales, sales enablement, delivery and consulting experience with Hitachi, EDS Corporation and Bechtel Inc., Nick is passionate about the transformation of customer experiences, partner relationships and employee performance through learning and collaboration.

About #Podcast:
#JobsOfFuture podcast is a conversation starter to bring leaders, influencers and lead practitioners to come on show and discuss their journey in creating the data driven future.

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#JobsOfFuture #Leadership #Podcast #Future of #Work #Worker & #Workplace

Originally Posted at: Nick Howe (@Area9Nick) talks about fabric of learning organization to bring #JobsOfFuture #podcast

Dec 27, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data shortage (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Large Visualizations in canvasXpress by analyticsweek

>> How to pick the right sample for your analysis by jburchell

>> How Google Understands You [Infographic] by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Meet data center compliance standards in hybrid deployments – TechTarget Under Data Center

>> Approaching The Hybrid Cloud Computing Model For Modern Government – Forbes Under Cloud

>> Financial Analytics Market 2018 Report with Manufacturers, Dealers, Consumers, Revenue, Regions, Types, Application – The Iowa DeltaChi Under Financial Analytics

More NEWS? Click Here

[ FEATURED COURSE]

Introduction to Apache Spark


Learn the fundamentals and architecture of Apache Spark, the leading cluster-computing framework among professionals…. more

[ FEATURED READ]

Rise of the Robots: Technology and the Threat of a Jobless Future


What are the jobs of the future? How many will there be? And who will have them? As technology continues to accelerate and machines begin taking care of themselves, fewer people will be necessary. Artificial intelligence… more

[ TIPS & TRICKS OF THE WEEK]

Keeping Biases Checked during the last mile of decision making
Today, a data-driven leader, data scientist, or other data expert is constantly put to the test: helping the team solve a problem using their skills and expertise. Believe it or not, part of that decision tree is derived from intuition, which introduces a bias that taints the resulting suggestions. Most skilled professionals understand and handle these biases well, but in a few cases we fall into tiny traps and find ourselves caught in biases that impair our judgment. So it is important to keep intuition bias in check when working on a data problem.

[ DATA SCIENCE Q&A]

Q: How do you assess the statistical significance of an insight?
A: * Is this insight just observed by chance, or is it a real insight?
Statistical significance can be assessed using hypothesis testing:
– State a null hypothesis, which is usually the opposite of what we wish to test (classifiers A and B perform equivalently; treatment A is equal to treatment B)
– Then choose a suitable statistical test and the test statistic used to reject the null hypothesis
– Also choose a critical region for the statistic to lie in that is extreme enough for the null hypothesis to be rejected (the p-value threshold)
– Calculate the observed test statistic from the data and check whether it lies in the critical region (see the sketch after the list of common tests below)

Common tests:
– One-sample Z test
– Two-sample Z test
– One-sample t-test
– Paired t-test
– Two-sample pooled equal variances t-test
– Two-sample unpooled unequal variances t-test with unequal sample sizes (Welch’s t-test)
– Chi-squared test for variances
– Chi-squared test for goodness of fit
– ANOVA (for instance: are two regression models equal? F-test)
– Regression F-test (i.e., is at least one of the predictors useful in predicting the response?)
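
A minimal sketch of this workflow using Welch’s t-test, assuming SciPy and hypothetical metric values observed under two treatments:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.52, 0.10, 200)  # hypothetical metric under treatment A
b = rng.normal(0.55, 0.10, 200)  # hypothetical metric under treatment B

# Null hypothesis: treatments A and B perform equivalently (equal means).
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # critical region at the 5% significance level
    print("Reject the null: the difference is statistically significant.")
else:
    print("Fail to reject the null.")
```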

Source

[ VIDEO OF THE WEEK]

#FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

War is 90% information. – Napoleon Bonaparte

[ PODCAST OF THE WEEK]

Scott Harrison (@SRHarrisonJD) on leading the learning organization #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Estimates suggest that by better integrating big data, healthcare could save as much as $300 billion a year — that’s equal to reducing costs by $1000 a year for every man, woman, and child.

Sourced from: Analytics.CLUB #WEB Newsletter