May 24, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data Accuracy  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> “What if the data tells you something you don’t like?” Three potential big data pitfalls by analyticsweekpick

>> 5 tips to becoming a big data superhero by analyticsweekpick

>> Marketing Analytics – Success Through Analysis by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Cyxtera Launches On-Demand HCI Compute, Connectivity Services – Data Center Knowledge (Under: Data Center)

>> CNCF Boosts Cloud Security Capabilities With SPIFFE, OPA Projects – eWeek (Under: Cloud Security)

>> Network security in the age of the internet of things – ComputerWeekly.com (Under: Internet Of Things)

More NEWS ? Click Here

[ FEATURED COURSE]

Statistical Thinking and Data Analysis


This course is an introduction to statistical data analysis. Topics are chosen from applied probability, sampling, estimation, hypothesis testing, linear regression, analysis of variance, categorical data analysis, and n… more

[ FEATURED READ]

Rise of the Robots: Technology and the Threat of a Jobless Future


What are the jobs of the future? How many will there be? And who will have them? As technology continues to accelerate and machines begin taking care of themselves, fewer people will be necessary. Artificial intelligence… more

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not as much a tech challenge as it is an adoption challenge. Adoption has its roots in the cultural DNA of any organization. Great data driven organizations weave the data driven culture into their corporate DNA. A culture of connection, interaction, sharing and collaboration is what it takes to be data driven. It’s about being empowered more than it is about being educated.

[ DATA SCIENCE Q&A]

Q:Name a few famous API’s (for instance GoogleSearch)
A: Google API (Google Analytics, Picasa), Twitter API (interact with Twitter functions), GitHub API, LinkedIn API (users data)…
Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData with Jon Gibs(@jonathangibs) @L2_Digital


Subscribe to YouTube

[ QUOTE OF THE WEEK]

If we have data, let’s look at data. If all we have are opinions, let’s go with mine. – Jim Barksdale

[ PODCAST OF THE WEEK]

#FutureOfData with @theClaymethod, @TiVo discussing running analytics in media industry


Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

Poor data can cost businesses 20%–35% of their operating revenue.

Sourced from: Analytics.CLUB #WEB Newsletter

NSA to crunch big data in AWS C2S

WASHINGTON, D.C. – The National Security Agency is moving some of its IT operations to Amazon’s cloud.

The National Security Agency (NSA) was represented by Alex Voultepsis, chief of the engineering and planning process for the NSA’s Intelligence Community Special Operations Group, at a session during the AWS Public Sector Symposium here this week. Voultepsis said during a panel discussion the agency plans to migrate some of its infrastructure to Amazon Web Services (AWS).

Voultepsis’s unit within the NSA will use Commercial Cloud Services (C2S), the Amazon cloud region established by the Central Intelligence Agency for classified data, which is open to all 17 federal intelligence agencies, according to Amazon officials interviewed after the panel session.

“The capabilities are there to meet our specialized needs for confidentiality, integrity and availability [of data],” Voultepsis said. “We can shift our focus from commodity things to mission-focused customer-facing things.”

The NSA as a whole also operates a private cloud called GovCloud, but for Voultepsis’s unit, C2S offers better value.

“The infrastructure as a service which Amazon provides has shown us significant IT efficiencies,” Voultepsis said, estimating that the savings on infrastructure costs alone will be between 50% and 55%.

While it was unclear how much of the NSA’s data center had moved already, Voultepsis said the ultimate goal is to be ‘all-in’ and close private data centers.

“It’s a seismic shift in the way we do business,” he said. “We’ve moved away from the concept of putting our big toe in the water with hybrid cloud, because from an efficiencies perspective, if you don’t go all-in and turn off your old [assets], you never gain the efficiencies…of intelligence integration and an enhanced security posture.”


Asked how he imagined the deployment looking in three to five years, Voultepsis said the agency is most looking forward to analyzing big data in AWS.

“The big data concept will come to fruition more broadly than it has…being able to ask questions…that you couldn’t ask in the past,” he said. “Federated, old-style dogpile type searches go away, and you’re asking complex questions against a broad corpus of data  — complex questions that you couldn’t even dream of asking in the past.”

Other federal agencies in the intelligence community have also moved to C2S for new projects. The National Geospatial-Intelligence Agency (NGA) was also represented on the panel by Jason Hess, cloud security manager for the office of the chief information officer.

“We’ve embraced the director of national intelligence’s…vision of providing intelligence integration,” Hess said. “We cannot continue to operate in the silo mentality of each agency not talking to each other…we’re leveraging this initiative to start working together.”

Hess and Kristine A Guisewite, information system security engineer from Raytheon working for the National Reconnaissance Office (NRO), agreed using C2S makes it easier for them to work together. It’s also easier for developers to do research on the platform with the wealth of knowledge available online, then execute specific projects inside the agencies. Having a consistent operating system image deployed to an entire agency also improves security over having to maintain different versions, Guisewite said.

However, there are still some issues with moving to the cloud. Auto Scaling, for example, has been difficult for the NRO to take advantage of, because of security concerns with machines spinning up and down, the security of external interfaces that require an opening in the firewall, and resources which aren’t always operating from the same IP address, according to Guisewite.

Unlike the NSA, the NGA is not ‘all-in,’ according to Hess. The NGA built a new building and state-of-the-art data center just three years ago that many in the agency are loath to abandon.

“It’s a coalition of the willing right now,” Hess said. “We’re in the bottom of the first [inning] in our cloud migration.”

To read the original article on SearchAWS, click here.

Originally Posted at: NSA to crunch big data in AWS C2S by analyticsweekpick

Healthcare Analytics Tips for Business Minded Doctors


The promise of data is huge – enormous clinical and financial rewards, less work, quantifying patient health habits and some form of IBM’s revered Watson supercomputer in every practice.

The 2012 U.S. Hospital Health Data Analytics Market report revealed that 50% of U.S. hospitals are expected to have implemented health data analytics tools by 2016, which represents an annual compound growth rate of 37.9%. In an ever-changing healthcare sector, it’s not wise for the private practice to be left behind.

Still, the industry faces serious technical and strategic challenges. Health data is diverse, complicated and unstructured across a range of criteria, making it very difficult for, say, a small practice doctor to penetrate – especially if he/she has limited experience operating in the tech sphere.

Below are some healthcare analytics tips for more business-minded physicians, no prior experience required.

Choose the right system for reporting. If you’re going to use analytics to better organize your business, don’t choose an inefficient one that’ll further complicate matters. A recent KLAS report titled Business Intelligence: Making Cents of Performance outlines some helpful features that providers searching for (or currently using) analytics tools should keep in mind. These include:

  • Quick implementation
  • Easy-to-use interface
  • Customizable to suit unique organizational needs
  • Ability to develop personalized dashboards for users
  • Flexibility to accommodate other parties like pharmacies, health plans, government entities, financial institutions, etc.

Integrate analytics with training. Practices should teach analytics to new hires, from staff members to doctors. This will help every member of the office understand how analytics data helps both patients and the execution of his/her daily tasks, to the point where seeing through data goggles becomes second nature.

Use dashboards for doctors at your practice to visualize data. As analytics platforms move closer to real-time processing and reporting – at the point of care, even – your practice should focus on updating processes and developing the capabilities needed to use analytics tools, particularly for real-time clinical decision support.

Use Google Analytics for online marketing efforts. Make sure your Google configuration is up to date and set up metrics goals for your website, e.g., conversion on opt-in forms, and engagement in the form of time on site and page depth.

Spot barriers to analytics adoption. According to an IBM study, titled “The Value of Analytics in Healthcare,” many healthcare executives have a difficult time differentiating bad and good data.

If this is not the case, other common barriers include lack of data-driven culture, lack of connecting the power of analytics to business improvement tactics, lack of management bandwidth or a perception that costs outweigh benefits.

Adopt an EHR that provides business data. Some people are huge fans of 2-in-1 shampoo/conditioner combos. In the same spirit, though it is a more serious investment, purchasing an EHR system with a built-in analytics platform may be the right choice for practices that don’t have the time or budget to seek standalone solutions.

Humanize the data. While this may seem obvious, making the data accessible and friendly to humans – who will, after all, be employing, analyzing and making full use of it – is essential. In this case, usability may be the most crucial element of an effective analytics system, because you and your staff simply will not use a tool that makes your jobs more difficult.

Note: This article originally appeared in CareCloud. Click here for the link.

Source by analyticsweekpick

May 17, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data security  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> What Beginners Need to Know about Instagram Analytics to Plan Their Strategies Better! by thomassujain

>> The future of marketing automation depends on data analytics at scale by analyticsweekpick

>> Data Driven Innovation: A Primer by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Susquehanna Bancshares (SUSQ) Receives News Sentiment Score of 0.17 – The Ledger Gazette (Under: Sentiment Analysis)

>> Cloud Is Difficult, HPE OneSphere Tackles ‘Accidental’ Hybrid … – Forbes (Under: Cloud)

>> Why Streaming Analytics is Critical to Real-time and Transformation – RTInsights (press release) (blog) (Under: Streaming Analytics)

More NEWS ? Click Here

[ FEATURED COURSE]

Artificial Intelligence


This course includes interactive demonstrations which are intended to stimulate interest and to help students gain intuition about how artificial intelligence methods work under a variety of circumstances…. more

[ FEATURED READ]

Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th Edition


The eagerly anticipated Fourth Edition of the title that pioneered the comparison of qualitative, quantitative, and mixed methods research design is here! For all three approaches, Creswell includes a preliminary conside… more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption. One of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way in helping get quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q:Give examples of data that does not have a Gaussian distribution, nor log-normal?
A: * Allocation of wealth among individuals
* Values of oil reserves among oil fields (many small ones, a small number of large ones)
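Both are classic heavy-tailed quantities. As a quick illustrative sketch (numpy-based; the parameters are assumptions chosen for illustration, not part of the original answer), Pareto (power-law) samples show the "many small values, a few huge ones" pattern that neither a Gaussian nor a log-normal captures well:

import numpy as np

# Illustrative sketch: Pareto (power-law) samples behave like wealth or oil-reserve data --
# many small values and a handful of very large ones that dominate the total.
rng = np.random.default_rng(0)
samples = 1.0 + rng.pareto(a=1.16, size=100000)   # shape ~1.16 gives an "80/20"-like tail

top_1_pct = np.sort(samples)[-1000:]
print("share of total held by top 1%:", top_1_pct.sum() / samples.sum())
print("mean vs. median:", samples.mean(), np.median(samples))   # mean >> median for heavy tails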

Source

[ VIDEO OF THE WEEK]

Understanding #Customer Buying Journey with #BigData


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world. – Atul Butte, Stanford

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MichOConnell, @Tibco


Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

Every second we create new data. For example, we perform 40,000 search queries every second (on Google alone), which amounts to 3.5 billion searches per day and 1.2 trillion searches per year. In August 2015, over 1 billion people used Facebook in a single day.

Sourced from: Analytics.CLUB #WEB Newsletter

How to Win Business using Marketing Data [infographics]


A marketer’s job is to win the hearts and minds of customers and prospects. Although the priority is clearly to account for the intricacies of customers’ intellects and emotions, most of the effort goes into intellectual triggers and only a small part into connecting with them emotionally. What has consistently been overlooked is that, when wielded correctly, emotion is a much more potent persuasive force for forging connections than intellect. The following infographic explains how various channels are being utilized and how they help make a business successful.

Source

May 10, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data security  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Social Media and the Future of Customer Support [Infographics] by v1shal

>> Customer Churn or Retention? A Must Watch Customer Experience Tutorial by v1shal

>> Nov 23, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

Wanna write? Click Here

[ NEWS BYTES]

>> Social Media Use in 2018 – Pew Research Center’s Internet and American Life Project (Under: Statistics)

>> Pentagon Releases Second Draft RFP For Multibillion Dollar JEDI Cloud – Nextgov (Under: Cloud)

>> At least 432 UK businesses to be affected by NIS cyber-security regulation – SC Magazine UK (Under: Cyber Security)

More NEWS ? Click Here

[ FEATURED COURSE]

Probability & Statistics


This course introduces students to the basic concepts and logic of statistical reasoning and gives the students introductory-level practical ability to choose, generate, and properly interpret appropriate descriptive and… more

[ FEATURED READ]

Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking


Written by renowned data science experts Foster Provost and Tom Fawcett, Data Science for Business introduces the fundamental principles of data science, and walks you through the “data-analytic thinking” necessary for e… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle data could lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric. This is the metric that matters the most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q:What are feature vectors?
A: * n-dimensional vector of numerical features that represent some object
* term occurrences frequencies, pixels of an image etc.
* Feature space: vector space associated with these vectors
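A tiny plain-Python sketch (the documents and names below are made up for illustration, not from the original answer) of how raw text becomes term-occurrence feature vectors in a shared feature space:

# Illustrative sketch: build term-frequency feature vectors for two tiny documents.
docs = ["big data is big", "data science needs data"]

# The sorted vocabulary defines the axes of the feature space (one dimension per term).
vocab = sorted({word for doc in docs for word in doc.split()})

def to_feature_vector(doc):
    counts = {}
    for word in doc.split():
        counts[word] = counts.get(word, 0) + 1
    # n-dimensional vector of term-occurrence frequencies, aligned to the vocabulary
    return [counts.get(word, 0) for word in vocab]

print(vocab)
print([to_feature_vector(d) for d in docs])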

Source

[ VIDEO OF THE WEEK]

Understanding #BigData #BigOpportunity in Big HR by @MarcRind #FutureOfData #Podcast


Subscribe to YouTube

[ QUOTE OF THE WEEK]

In God we trust. All others must bring data. – W. Edwards Deming

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MPFlowersNYC, @enigma_data


Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

And one of my favourite facts: at the moment, less than 0.5% of all data is ever analysed and used. Just imagine the potential here.

Sourced from: Analytics.CLUB #WEB Newsletter

A Super Scary Halloween Story for Data Scientists and other Change Agents


As the mist swirled and parted, the Seer suddenly appeared before me. He was barely recognizable as human. His skin was deeply etched from a hundred winters spent roaming the barren peaks and crags of Mount Olympus. He regarded me with his one good eye, and croaked words that chilled the air around us… “Hey, wassup? What can I do you for? And make it snappy, ’cause I’ve got baklava in the oven. You know how easy it is to burn baklava?”

“Oh, Seer, I’ve come a long way to ask you something that’s been troubling my soul for a very long time. I’ve been involved in many projects where a centralized Data Science team tries to help internal customers from a business unit leverage analytics in some way. I’ve seen mixed results. What’s the secret to a successful analytics engagement with internal business customers? How can I get them to cooperate with me, to take my findings seriously, and above all to actually implement changes to business processes based on my findings?”

The Seer smiled. Or was it a sneer? “Can you handle the truth?” he said.

That sounded vaguely familiar… “Didn’t Jack Nicholson say that in…”

“Jack got it from me!” the Seer snapped. “Can you handle the truth?”

“I… think so…”, I sputtered. “Yes! Give it to me straight!”

The contempt-o-meter

“Here’s the thing,” said the Seer, sniffing the air for the slightest hint of burning baklava. “As soon as you start having feelings of contempt for your internal customers – let’s call them your clients for short – you’re done. Cash in your chips. Go home. It’s over. Humans are exquisitely fine-tuned to sense the feelings of others. It’s simply impossible to hide your feelings of contempt from your clients. And nobody wants to work with some pretentious jackass who they sense is always looking down on them.”

“Well, no problem there!” I preened. “I pride myself on always being professional and respectful towards my clients.”

“Oh, really?” he said, his smile-sneer growing wider.

“Clients can sense how you feel about them. Ever sat in a meeting with your clients, and thought to yourself that you’re surrounded by mouth breathing knuckle draggers?”

“Well… ”

“They can sense how you feel about them. Ever complained to your colleagues about how woefully misguided your clients are?”

“I…. ”

“They can sense how you feel about them. Ever… ”

“Okay, okay, I get it. I can see how thinking about my clients like that is not going to win me their cooperation or help me be an effective Data Scientist, which is all about changing a company’s behavior in some way, whether big or small. But exactly how can I silence my judgmental internal dialog?”

First things first

“Build empathy for your clients. Empathy… A good, solid Greek word if there ever was one. And empathy in the context of cross-organizational relationships is not about moral virtue. It’s about getting things done that are good for the business, and good for your career at the same time. Oh, and you can’t fake empathy. The contempt radar that I mentioned earlier? Yeah, it also detects insincerity.”

A piece of the Seer’s ear fell off, but it didn’t seem to bother him.

“But before we get into my specific advice for increasing your empathy for internal customers,” he said, “you have to be honest with yourself. Do you really give a rat’s ass about their happiness? Really? If yes, then continue. If not, then go back to raising your goats, or practicing your lute, or weaving your baskets from found human hair to sell on Etsy. Building empathy is very hard work. But it’s also the only path I’ve found to delivering happiness to internal customers, which turns out to be the golden road to effectiveness as you’ve defined it. So, do you have a heartfelt desire to make your clients happy?”

I nodded.

You’re wrong. You’re just wrong.

“What if I told you that everything you know is a lie?” quizzed the Seer. I was expecting him to launch into the whole red-pill blue-pill thing, but he skipped it.

“Your contempt for your internal customer is built on your perceptions. You think you know all about your client. But your mind is endlessly filling-in knowledge gaps with fantasy. Your mind constructs your perceptions out of teensy bits of reality plus huge doses of stereotypes and random gastric disturbances. Think about how often in the past you’ve misread people and situations, and you’ll realize that much of what you think you know about your client is probably just plain wrong.”

Beginner’s mind

The Seer took another whiff of air, like an Irish Setter on the scent.

“Here’s one idea for cultivating authentic empathy. Two mountains over there are these Buddhists. They’re an absolute riot at my fondue parties, by the way… Anyhow, they talk about the importance of having a beginner’s mind. It means that, regardless of how many decades you’ve been practicing meditation, you should approach each new meditation session as if it were your first time meditating. You should approach it with anticipation, curiosity, and an openness to being surprised.”

“What if you took the same kind of approach to your internal customers?”

On not peeing into the wind

“For example: What if you took them to lunch, and asked them questions that helped you to deeply understand what makes them tick at work:

1) What brings them joy in their job?
2) What brings them dread?
3) What are their career aspirations?
4) What makes their boss praise them?
5) What makes their boss yell at them?
6) What must they do to achieve their job goals (and hence their bonus)?

“You were expecting project-related questions, right?” the Seer sneer-smiled.

“The truth is, nobody gives a sh*t about your equations and graphs, per se. But they deeply give a sh*t about how your equations and graphs might impact them along those personal dimensions, both positively and negatively. They’ll never say so, but they do. They might not even consciously realize it, but they do.”

“So, what I’m saying is, as you think about how you are going to get your Brilliant Data Science Idea implemented, deconstruct those personal dimensions of your clients, and then explain the benefits of your Brilliant Data Science Idea to them in ways that address those personal dimensions.”

“But isn’t that manipulative? you ask. Only if it’s done with malice, I answer.”

“Here’s an analogy. You are out sailing on the wine dark sea, and you want to get your little boat from point A to point B, because you’ve heard that the feta is amazing at point B. Isn’t it wise to consider where the winds are blowing, and where the shoals are lurking, and to get aligned with the great forces of nature, rather than be willfully ignorant of them?  Isn’t it better to leverage those forces, rather than to fight them? Where is the manipulation and malice in that?”

“Look, I don’t expect you to just take my word for it. Try it for yourself. Experiment with it. Play with it. Then come back and tell me how it went.”

Baklava’s done

The sweet smell of freshly baked baklava was now competing with the Seer’s formidable stench. “I love the smell of baklava in the morning!” said the Seer.

“Thanks for the advice,” I said. “But it sounds like very hard work. Interpersonal skills… Change management strategies… These are not exactly part of the standard Data Science repertoire.”

“True dat,” said the Seer, winking at me with his one good eye.

“But luckily you don’t have to be perfect at it. Because you know what they say… In the land of the blind, the one-eyed man is…”

“… king!” I answered.

And with that, the Seer was gone.

Please feel free to contact me via LinkedIn

Source: A Super Scary Halloween Story for Data Scientists and other Change Agents by groumeliotis

Who Is Your ‘Biggest Fan’ on Facebook? Navigating the Facebook Graph API ~ 2016 Tutorial

Before we begin, here’s a working example of this quick Facebook app on my Github 🙂


There are a few (or a lot, depending on your excitement) cool things you can do with the Facebook Graph API.

First of all, what is the Graph API?

  1. In short, it’s our way of getting Facebook goodies like posts, pictures, status updates, friends list, all that good stuff. We can also post data (AKA update our status) using the Graph API.

For this mini blog tutorial, I’m going to cover the getting part.

In particular, I’ll be demonstrating how to find:

  1. Your most liked posts
  2. The friends who most like your posts (your biggest fans)

I’ll be using JavaScript and Python for this tutorial. No worries if these languages aren’t your go-to; the concepts I cover in this tutorial are constant across all languages.

Let’s roll.

1. Boilerplate (boring stuff) out of the way

First, let’s set up a very simple JavaScript SDK so we can talk to Facebook using JavaScript.

I didn’t want to waste precious code space with boilerplate code, so check it out on my Github.

2. Basic GET request

Let’s do a basic GET request. Let’s get all my posts, messages, stories along with the likes and comments associated with each post.

function getPosts() {
   FB.api('me/posts/?fields=comments.summary(true),likes.summary(true).fields(name), message,  story',
   function(response) {
       passPosts(JSON.stringify(response))
   });
} 

Ignore the passPosts() function for now

This returns a JSON response as such:

{
  "data": [
    {
      "message": "something about the bao",
      "story": "Nikhil Bhaskar updated his profile picture.",
      "id": "[id of post]",
      "likes": {
        "data": [
          {
            "id": "[id of liker]",
            "name": "[name of liker]"
          },
          ...
        ],
        "summary": {
          "total_count": [num of likes],
          "can_like": true,
          "has_liked": false
        }
      }
    },
    {}, {}, {}, ...
  ],
  "paging": {
    "previous": "[url]",
    "next": "[url]"
  }
}

Notice how our result has been paginated. In other words, to actually get all of our posts, we need to run an API call again, on the “paging”:”next”: url.

We should avoid multiple API calls whenever possible, so let’s slightly modify our basic GET request.

'me/posts/?limit=5000&fields=comments.summary(true),likes.summary(true).fields(name), message,  story'

Notice how now, we include a limit of 5000 posts. It seems to me that this is the max limit we can set (I’m not sure; it was more trial & error here). This way, we get as many posts as we can in one API call. Consequently, we greatly reduce the number of API calls we make.

Learner’s check:

  1. Our ‘response’ object is a JavaScript object. In order for us to easily pass it around we convert it to a string with JSON.stringify()

3. AJAX call to Python

Let’s pass our JSON response to our Python backend so we can further process it.

function passPosts(userPosts){
        $.ajax({
          method: "POST",
          url: "/fb_login/",
          data: { 
            "user_posts": userPosts
            }
        })
        .success(function(data) {
          //handle results
        });
      }

Learner’s check

  1. We call our AJAX function (POST) to the url route ‘fb_login’, with our userPosts

4. Python ~ Get the AJAX POST data

Side note: I am using Django. You can use whatever framework (or no framework) you want
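The snippets below also assume a few imports at the top of views.py. The original file isn’t shown, so treat this as a minimal sketch of what that header would need:

# views.py -- imports assumed by the snippets that follow (sketch, not the original file)
import json                      # json.loads() in string_to_dict()
import requests                  # requests.get() when following the "paging" -> "next" URLs
from collections import Counter  # Counter() in get_most_likers()

from django.shortcuts import render  # render() at the end of fb_login()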

In views.py, let’s get our Facebook API response:

def fb_login(request):
    if request.method == 'POST':

        all_posts_dict = get_all_posts_dict(request.POST['user_posts'])

        '''Ignore the rest of the function for now
        all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

        my_most_liked_post = sort_posts_dict_by_likes(all_posts_dict)[0]
        print my_most_liked_post
        '''
    return render(request, 'talentur/fb_login.html')

What does ‘get_all_posts_dict(arg)’ do?

def get_all_posts_dict(response):
	return tornado_all_posts_dict(string_to_dict(response))

As you can see, it calls 2 functions. So it does 2 tasks:

  1. Convert our response to a Python dictionary
  2. Call a function on this dictionary to get all of our posts (remember, the JSON response we got was paginated)

Here, we achieve task 1 with our string_to_dict function:

def string_to_dict(json_string):
    return json.loads(json_string)

And here, we call a recursive function tornado_all_posts_dict to achieve task 2:

def tornado_all_posts_dict(response_dict, master_posts=None):
    # Accumulate every page of posts into a single dict with a 'data' list
    master_posts = {'data': []} if master_posts is None else master_posts
    posts = response_dict['data']
    master_posts['data'] = master_posts['data'] + posts
    # If Facebook paginated the response, fetch the next page and recurse;
    # master_posts is the same dict object, so each call keeps appending to it
    if 'paging' in response_dict and 'next' in response_dict['paging']:
        r = requests.get(response_dict['paging']['next']).json()
        tornado_all_posts_dict(r, master_posts)
    return master_posts

5. Find your most liked posts

We gotta do a little clean up, first. As you’ve noticed, the all_posts_dict is a Python dictionary with a “data” property.

“data” is a list of several dictionaries. Each dictionary in “data” is basically a post/message/story, etc. The problem is that some of these dictionaries don’t have a “likes” property.

Example:

{
      "id": "[id]"
}, ...

These are probably just occurrences when you change your cover photo to a photo you’ve already used before, for example. Although there are “likes” associated with your cover photo, there are no “likes” associated with the act of updating your cover photo back to this old picture. Make sense?

So, let’s remove all the dictionaries in “data” that have no “likes” property

def remove_dicts_from_list_based_on_key(response_dict, key):
	the_list = response_dict['data']
	return [dicti for dicti in the_list if key in dicti]

So, in views.py, in def fb_login function, add:

all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

Now, we can sort our all_posts_dict by “likes”:

def sort_posts_dict_by_likes(response_dict):
	list_of_user_post_objects = response_dict['data']
	list_of_user_post_objects = sorted(list_of_user_post_objects, key=lambda k: -k['likes']['summary']['total_count']) 
	return list_of_user_post_objects

Learner’s check

  1. “likes” has a “summary” property, which in turn has a “total_count” property
  2. “total_count” is the number we care about here
  3. -k because we are sorting in descending order

Now, in views.py, the def fb_login function should look like this:

def fb_login(request):
	if request.method == 'POST':

		all_posts_dict = get_all_posts_dict(request.POST['user_posts'])
		all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

		my_most_liked_post = sort_posts_dict_by_likes(all_posts_dict)[0]
		print my_most_liked_post
		
	return render(request, 'talentur/fb_login.html')

Our response:

{u'message': u'AHAHAHAHHA', u'id': u'1090366184360236_310878925642303', u'comments': {u'data': [], u'summary': {u'total_count': 171, u'can_comment': True, u'order': u'chronological'}}, u'likes': {u'data': [], u'summary': {u'total_count': 1643, u'has_liked': False, u'can_like': True}}}

This was a post I shared a long time ago. Got over 1,000 likes, haha


Obviously, our actual result is just a Python dictionary. But you can use its id to get everything associated with this post.
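For example, if you wanted to pull that post back out of the Graph API straight from Python, a request along these lines should do it (the access token is a placeholder and the API version is an assumption; only the post id comes from the response above):

import requests

POST_ID = '1090366184360236_310878925642303'   # id taken from the response above
ACCESS_TOKEN = 'YOUR_ACCESS_TOKEN'             # placeholder -- supply your own token

# Sketch: fetch a single post by id, along with its like and comment summaries
resp = requests.get(
    'https://graph.facebook.com/v2.5/' + POST_ID,
    params={
        'access_token': ACCESS_TOKEN,
        'fields': 'message,likes.summary(true),comments.summary(true)',
    },
)
print(resp.json())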

Let’s move on..

6. Find your biggest fans

First, let’s put every single friend who liked your posts into a list of tuples. Each tuple will contain the id and name of the friend.

def liker_ids_tornado(response, like_ids_list = None):
	like_ids_list = [] if like_ids_list is None else like_ids_list
	data_list = response['data']

	for post_message_story in data_list:
		if 'likes' in post_message_story:
			for liker in post_message_story['likes']['data']:
				like_ids_list.append((liker['id'], liker['name']))
	if 'paging' in response and 'next' in response['paging']:
		r = requests.get(response['paging']['next']).json()
		liker_ids_tornado(r, like_ids_list)
	return like_ids_list

Now, let’s use the convenient Counter class from the ‘collections’ module

from collections import Counter

def get_most_likers(like_ids_list):
    # Count how many times each (id, name) tuple appears across all of your posts
    id_results_dict = Counter(like_ids_list)
    return id_results_dict

Here’s what our def fb_login function looks like now:

def fb_login(request):
	if request.method == 'POST':

		all_posts_dict = get_all_posts_dict(request.POST['user_posts'])
		all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

		like_ids_list = liker_ids_tornado(all_posts_dict)
		my_biggest_fans = get_most_likers(like_ids_list)
		print my_biggest_fans

	return render(request, 'talentur/fb_login.html')

Our response:

Counter({(u'id', u'Name'): 101, (u'id2', u'Name2'):97...}) 

The result is a Counter, which is a subclass of a Dictionary. So, my ‘biggest fan’ (who I won’t disclose here) has given me a total of 101 likes.
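And if you only want a short leaderboard rather than the whole Counter, most_common() gives you the top N directly (a small sketch reusing the variables from the view above):

# Top 5 fans by number of likes given; keys are (id, name) tuples, values are counts
for (fan_id, fan_name), like_count in my_biggest_fans.most_common(5):
    print fan_name, like_count   # Python 2 print statement, to match the rest of the tutorial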

There you have it. A little Facebook insight for ya.

Enjoy 🙂

Once again, here’s a working example of this quick Facebook app on my Github 🙂

Originally Posted at: Who Is Your ‘Biggest Fan’ on Facebook? Navigating the Facebook Graph API ~ 2016 Tutorial

May 03, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

statistical anomaly  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Santa 2.0, What Santa could do with technology by v1shal

>> How Airbnb Uses Big Data And Machine Learning To Guide Hosts To The Perfect Price by analyticsweekpick

>> 80/20 Rule For Startups by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> Marsh Enhances Cyber Risk Products to Address Business Interruption Risks – Insurance Journal (Under: Risk Analytics)

>> Social Media Analytics Market: Rapidly Growing Dynamic Markets – CMFE News (press release) (blog) (Under: Social Analytics)

>> Alabama Passes Data Security and Data Breach Notification Statute – JD Supra (press release) (Under: Data Security)

More NEWS ? Click Here

[ FEATURED COURSE]

Python for Beginners with Examples


A practical Python course for beginners with examples and exercises…. more

[ FEATURED READ]

The Industries of the Future


The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring…. more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from a zombie apocalypse of unscalable models
One living and breathing zombie in today’s analytical models is the pulsating absence of error bars. Not every model is scalable or holds ground as data grows. The error bars tagged to almost every model should be duly calibrated. As business models rake in more data, the error bars keep them sensible and in check. If error bars are not accounted for, we make our models susceptible to failure, leading us to a Halloween we never want to see.

[ DATA SCIENCE Q&A]

Q:Give examples of data that does not have a Gaussian distribution, nor log-normal?
A: * Allocation of wealth among individuals
* Values of oil reserves among oil fields (many small ones, a small number of large ones)

Source

[ VIDEO OF THE WEEK]

@ChuckRehberg / @TrigentSoftware on Translating Technology to Solve Business Problems #FutureOfData #Podcast


Subscribe to YouTube

[ QUOTE OF THE WEEK]

Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom. – Clifford Stoll

[ PODCAST OF THE WEEK]

Want to fix #DataScience ? fix #governance by @StephenGatchell @Dell #FutureOfData #Podcast


Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

Brands and organizations on Facebook receive 34,722 Likes every minute of the day.

Sourced from: Analytics.CLUB #WEB Newsletter

2016 Trends in Big Data: Insights and Action Turn Big Data Small

Big data’s salience throughout the contemporary data sphere is all but solidified. Gartner indicates its technologies are embedded within numerous facets of data management, from conventional analytics to sophisticated data science issues.

Consequently, expectations for big data will shift this year. It is no longer sufficient to justify big data deployments by emphasizing the sheer amount and variety of data these technologies ingest; what matters is the specific business value they create by offering targeted applications and use cases that provide, ideally, quantifiable results.

The shift in big data expectations, then, will go from big to small. That transformation in the perception and deployments of big data will be spearheaded by numerous aspects of data management, from the evolving roles of Chief Data Officers to developments in the Internet of Things. Still, the most notable trends impacting big data will inevitably pertain to the different aspects of:

• Ubiquitous Machine Learning: Machine learning will prove one of the most valuable technologies for reducing time to insight and action for big data. Its propensity for generating future algorithms based on the demonstrated use and practicality of current ones can improve analytics and the value it yields. It can also expedite numerous preparation processes related to data integration, cleansing, transformation and others, while smoothing data governance implementation.
• Cloud-Based IT Outsourcing: The cloud benefits of scale, cost, and storage will alter big data initiatives by transforming IT departments. The new paradigm for this organizational function will involve a hybridized architecture in which all but the most vital and longstanding systems are outsourced to complement existing infrastructure.
• Data Science for Hire: Whereas some of the more complicated aspects of data science (tailoring solutions to specific business processes) will remain tenuous, numerous aspects of this discipline have become automated and accelerated. The emergence of a market for algorithms, Machine Learning-as-a-Service, and self-service data discovery and management tools will spur this trend.

From Machine Learning to Artificial Intelligence
The correlation between these three trends is probably best typified by the increasing prevalence of machine learning, which is an integral part of many of the analytics functions that IT departments are outsourcing and of the aspects of data science that have become automated. Expectations for machine learning will truly blossom this year, with Gartner offering numerous predictions for the end of the decade in which elements of artificial intelligence are normative parts of daily business activities. The projected expansion of the IoT, and the automated predictive analytics required for its continued growth, will increase the reliance on machine learning, while its applications in various data preparation and governance tools are equally vital.

Nonetheless, the chief way in which machine learning will help to shift the focus of big data from sprawling to narrow relates to the fact that it either eschews or hastens human involvement in all of the aforementioned processes, and in many others as well. Forrester predicted that: “Machine learning will replace manual data wrangling and data governance dirty work…The freeing up of time will accelerate the execution of data and analytics strategies, allowing organizations to get to the good stuff, taking actions and driving better business outcomes based on the data.” Machine learning will enable organizations to spend less time managing their data and more time creating action from the insights they provide.

Accelerating data management processes also enables users to spend more time understanding their data. John Rueter, Vice President of Marketing at Cambridge Semantics, noted the importance of establishing the context and meaning of data. “Everyone is in such a race to collect as much data as they can and store it so they can get to it when they want to, when oftentimes they really aren’t thinking ahead of time about what they want to do with it, and how it is going to be used. The fact of the matter is what’s the point of collecting all this data if you don’t understand it?”

Cloud-Based IT
The trend of outsourcing IT to the cloud is evinced in a number of different ways, from a distributed model of data management to one in which IT resources are more frequently accessed through the cloud. The variety of basic data management services that the enterprise is able to outsource via the cloud (including analytics, integration, computations, CRM, etc.) is revamping typical architectural concerns, which increasingly involve the cloud. These facts are substantiated by IDC’s predictions that, “By 2018, at least 50% of IT spending will be cloud based. By 2018, 65% of all enterprise IT assets will be housed offsite and 33% of IT staff will be employed by third-party, managed service providers.”

The impact of this trend goes beyond merely extending the cloud’s benefits of decreased infrastructure, lower costs, and greater agility. It means that a number of pivotal facets of data management will require less daily manipulation on the part of the enterprise, and that end users can implement the results of those data driven processes more quickly and for more specific use cases. Additionally, this trend heralds a fragmentation of the CDO role. The inherent decentralization involved in outsourcing IT functions through the cloud will be reflected in an evolution of this position. The foregoing Forrester post notes that “We will likely see fewer CDOs going forward but more chief analytics officers, or chief data scientists. The role will evolve, not disappear.”

Self-Service Data Science
Data science is another realm in which the other two 2016 trends in big data coalesce. The predominance of machine learning helps to improve the analytical insight gleaned from data science, just as a number of key attributes of this discipline are being outsourced and accessed through the cloud. Those include numerous facets of the analytics process, including data discovery, source aggregation, multiple types of analytics and, in some instances, even analysis of the results themselves. As Forrester indicated, “Data science and real-time analytics will collapse the insights time-to-market. The trending of data science and real-time data capture and analytics will continue to close the gaps between data, insight and action.” For 2016, Forrester predicts: “A third of firms will pursue data science through outsourcing and technology. Firms will turn to insights services, algorithms markets, self-service advanced analytics tools, and cognitive computing capabilities to help fill data science gaps.”

Self-service data science options for analytics encompass myriad forms, from providers that provision graph analytics, Machine Learning-as-a-Service, and various forms of cognitive computing. The burgeoning algorithms market is a vital aspect of this automation of data science, and enables companies to leverage previously existing algorithms with their own data. Some algorithms are stratified by use case, business unit, or vertical industry. Similarly, Machine Learning-as-a-Service options provide excellent starting points for organizations to simply add their data and reap predictive analytics capabilities.

Targeting Use Cases to Shrink Big Data
The principal point of commonality between all of these trends is the furthering of the self-service movement and the ability it gives end users to hone in on the uses of data, as opposed to merely focusing on the data itself and its management. The ramifications are that organizations and individual users will be able to tailor and target their big data deployments for individualized use cases, creating more value at the departmental and intradepartmental levels…and for the enterprise as a whole. The facilitation of small applications and uses of big data will justify this technology’s dominance of the data landscape.

Source: 2016 Trends in Big Data: Insights and Action Turn Big Data Small