December 12, 2016 Health and Biotech analytics news roundup

Here’s the latest in health and biotech analytics:

Potential for wide-scale whole-genome sequencing in humans using nanopore approaches: Researchers have recently sequenced an entire human genome using Oxford Nanopore’s hand-held MinION devices. These instruments cost less than $1000 plus consumables, and have the potential to change the economics of sequencing.

Advanced Plan for Health Upgrades Next-Generation Predictive Analytics On Poindexter Population Health Management Platform: The platform currently assigns broad risk scores to patients based on demographic data. Now, the company claims to be able to predict the likelihood of specific catastrophic illnesses.

Can Blockchain Give Healthcare Payers Better Analytical Insight?: Jennifer Resnick summarizes a report from Deloitte on the technology. Potential uses include reducing overhead, detecting fraud, and managing provider directories.

Health Tech 2016: A Year To Recalibrate: David Shaywitz writes about the apparent gap between the promise of personalized medicine and actual progress. He understands that new techniques need to actually show real-world value, yet also sees the need for optimism as an impetus for advancement.

Originally Posted at: December 12, 2016 Health and Biotech analytics news roundup by pstein

A Good Patient Experience Does not Start with Medical Spending

Patient experience (PX) has become an important topic for US hospitals. The Centers for Medicare & Medicaid Services (CMS) will be using patient feedback about their care as part of its reimbursement plan for acute care hospitals (see the Hospital Value-Based Purchasing (VBP) program). According to QualityNet, the purpose of the VBP program is to promote better clinical outcomes for patients and improve their experience of care during hospital stays. Not surprisingly, hospitals are focusing on improving the patient experience to ensure they receive their maximum incentive payments. But what is the cost of a good patient experience? Does increased hospital spending on medical services translate into a better patient experience?

Medicare Spending Per Beneficiary (MSPB)

Medicare tracks how much it spends on each Medicare patient admitted to a hospital, compared to the amount it spends per hospital patient nationally. Known as "Medicare Spending per Beneficiary" (MSPB), this measure assesses the cost of care. By measuring cost of care this way, CMS hopes to increase the transparency of care for consumers and to recognize hospitals that provide high-quality care at lower cost to Medicare.

The MSPB measure for each hospital is calculated as the ratio of the hospital's MSPB Amount to the median MSPB Amount across all hospitals. A hospital with an MSPB value of 1.0 has average spending per patient; a value less than 1.0 indicates below-average spending per patient, and a value greater than 1.0 indicates above-average spending per patient.
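To make the arithmetic concrete, here is a minimal sketch of the calculation in Python (not CMS's actual code; the hospitals and dollar amounts are made up):

from statistics import median

# Hypothetical MSPB Amounts (Medicare spending per patient) for three hospitals
mspb_amounts = {"Hospital A": 18500.0, "Hospital B": 21300.0, "Hospital C": 19900.0}

# The benchmark is the median MSPB Amount across all hospitals
national_median = median(mspb_amounts.values())

# Each hospital's MSPB measure is its MSPB Amount divided by that median:
# 1.0 = average spending per patient, <1.0 = below average, >1.0 = above average
mspb_measures = {name: amount / national_median for name, amount in mspb_amounts.items()}
print(mspb_measures)  # Hospital A ~0.93, Hospital B ~1.07, Hospital C = 1.0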

Figure 1. Patient Loyalty by Medicare Spending per Beneficiary.

Patient Experience

Patient experience (PX) reflects patients' perceptions of their recent inpatient experience. PX is collected by a survey known as HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems). HCAHPS (pronounced "H-caps") is a national, standardized survey of hospital patients, developed by a partnership of public and private organizations to publicly report the patient's perspective of hospital care.

The survey asks a random sample of recently discharged patients about important aspects of their hospital experience. The data set includes patient survey results for over 3,800 US hospitals on ten measures of patients' perspectives of care (e.g., nurse communication, pain well controlled). I combined two general questions (overall hospital rating and likelihood to recommend) into a patient advocacy metric, for a total of 9 PX metrics. Across all 9 metrics, hospital scores can range from 0 (bad) to 100 (good). You can see the PX measures for different US hospitals here.

Figure 2. Patient Experience by Medicare Spending per Beneficiary

The Relationship Between Medicare Spend and Patient Loyalty/Experience

Hospitals were divided into 10 groups based on their MSPB score. Figure 1 contains the plot of patient advocacy for each of the 10 MSPB levels. Figure 2 contains the plot of patient experience ratings for each of the 10 MSPB levels.

There were statistically significant differences across the 10 segments. Although statistically significant, the differences were not substantial: the MSPB segments accounted for only about 4.5% of the variance in the patient metrics.
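As a rough sketch of this kind of analysis (not the author's actual code; the file and column names here are hypothetical), the segmentation and the variance-accounted-for figure could be computed like this:

import pandas as pd

# Hypothetical input: one row per hospital with its MSPB measure and
# the two general HCAHPS questions (column names are assumptions)
df = pd.read_csv("hospital_scores.csv")

# Patient advocacy metric: the two general questions combined
df["advocacy"] = df[["overall_rating", "recommend"]].mean(axis=1)

# Divide hospitals into 10 segments (deciles) by MSPB score
df["mspb_decile"] = pd.qcut(df["mspb"], 10, labels=False)

# Share of variance in the patient metric accounted for by the segments
# (eta-squared: between-segment sum of squares over total sum of squares)
grand_mean = df["advocacy"].mean()
ss_total = ((df["advocacy"] - grand_mean) ** 2).sum()
segment_mean = df.groupby("mspb_decile")["advocacy"].transform("mean")
ss_between = ((segment_mean - grand_mean) ** 2).sum()
print("variance accounted for:", ss_between / ss_total)  # ~0.045 per the article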

We might expect that hospitals that spend more on medical services per patient would receive higher patient experience ratings. Those patients are, after all, receiving more resources than patients at hospitals that spend less. If anything, we see the opposite: as the cost of care goes up, patient experience actually decreases (however slightly).

Improving Patient Loyalty and the Patient Experience

Improving patient loyalty and the patient experience clearly does not start with medical spending. The results show that hospitals that spend less on medical services receive patient experience and patient loyalty scores comparable to those of hospitals that spend more.
Figure 3. Adoption Rates of Customer Feedback Program Practices of Loyalty Leaders and Loyalty Laggards

One possible approach to understanding patient experience/loyalty differences across hospitals is to understand how hospitals build their patient experience (PX) programs. How mature is their PX program? Do they even have one? In other industries, we know that loyalty-leading companies structure their customer experience programs differently than loyalty-lagging companies (see Figure 3). Specifically, loyalty leaders: 1) have top executive support for the customer feedback program, 2) communicate all aspects of the program throughout the company and 3) integrate their customer feedback with other business data for deep-dive customer research. I suspect these same practices (or something similar) are key to a successful PX program (i.e., high patient loyalty and patient experience) in the hospital setting. But that is an empirical question.

Research on the role of PX programs in hospitals would help hospitals better understand the ingredients they need to improve the patient experience. Many hospitals receive very low marks on their HCAHPS ratings, suggesting they will be penalized on their Medicare payments. This PX program research needs to identify best practices. Findings could help individual hospitals improve their HCAHPS scores by incorporating best practices into their PX programs. Additionally, sharing those best practices across all hospitals could help the healthcare industry overall, removing inefficiencies in healthcare delivery while improving patient satisfaction with care.

Source: A Good Patient Experience Does not Start with Medical Spending

NSA to crunch big data in AWS C2S

WASHINGTON, D.C. – The National Security Agency is moving some of its IT operations to Amazon’s cloud.

The National Security Agency (NSA) was represented by Alex Voultepsis, chief of the engineering and planning process for the NSA's Intelligence Community Special Operations Group, at a session of the AWS Public Sector Symposium here this week. Voultepsis said during a panel discussion that the agency plans to migrate some of its infrastructure to Amazon Web Services (AWS).

Voultepsis’s unit within the NSA will use Commercial Cloud Services (C2S), the Amazon cloud region established by the Central Intelligence Agency for classified data, which is open to all 17 federal intelligence agencies, according to Amazon officials interviewed after the panel session.

“The capabilities are there to meet our specialized needs for confidentiality, integrity and availability [of data],” Voultepsis said. “We can shift our focus from commodity things to mission-focused customer-facing things.”

The NSA as a whole also operates a private cloud called GovCloud, but for Voultepsis’s unit, C2S offers better value.

“The infrastructure as a service which Amazon provides has shown us significant IT efficiencies,” Voultepsis said, estimating that the savings on infrastructure costs alone will be between 50% and 55%.

While it was unclear how much of the NSA's data center footprint had already moved, Voultepsis said the ultimate goal is to be ‘all-in’ and close private data centers.

“It’s a seismic shift in the way we do business,” he said. “We’ve moved away from the concept of putting our big toe in the water with hybrid cloud, because from an efficiencies perspective, if you don’t go all-in and turn off your old [assets], you never gain the efficiencies…of intelligence integration and an enhanced security posture.”


Asked how he imagined the deployment looking in three to five years, Voultepsis said the agency is most looking forward to analyzing big data in AWS.

“The big data concept will come to fruition more broadly than it has…being able to ask questions…that you couldn’t ask in the past,” he said. “Federated, old-style dogpile type searches go away, and you’re asking complex questions against a broad corpus of data — complex questions that you couldn’t even dream of asking in the past.”

Other federal agencies in the intelligence community have also moved to C2S for new projects. The National Geospatial-Intelligence Agency (NGA) was also represented on the panel by Jason Hess, cloud security manager for the office of the chief information officer.

“We’ve embraced the director of national intelligence’s…vision of providing intelligence integration,” Hess said. “We cannot continue to operate in the silo mentality of each agency not talking to each other…we’re leveraging this initiative to start working together.”

Hess and Kristine A Guisewite, an information system security engineer from Raytheon working for the National Reconnaissance Office (NRO), agreed that using C2S makes it easier for them to work together. It's also easier for developers to do research on the platform, given the wealth of knowledge available online, and then execute specific projects inside the agencies. Having a consistent operating system image deployed to an entire agency also improves security over maintaining different versions, Guisewite said.

However, there are still some issues with moving to the cloud. Auto Scaling, for example, has been difficult for the NRO to take advantage of, because of security concerns with machines spinning up and down, the security of external interfaces that require an opening in the firewall, and resources which aren’t always operating from the same IP address, according to Guisewite.

Unlike the NSA, the NGA is not ‘all-in,’ according to Hess. The NGA built a new building and state-of-the-art data center just three years ago that many in the agency are loath to abandon.

“It’s a coalition of the willing right now,” Hess said. “We’re in the bottom of the first [inning] in our cloud migration.”

To read the original article on SearchAWS, click here.

Originally Posted at: NSA to crunch big data in AWS C2S by analyticsweekpick

Healthcare Analytics Tips for Business Minded Doctors


The promise of data is huge – enormous clinical and financial rewards, less work, quantified patient health habits, and some form of IBM's revered Watson supercomputer in every practice.

The 2012 U.S. Hospital Health Data Analytics Market report revealed that 50% of U.S. hospitals are expected to have implemented health data analytics tools by 2016, representing a compound annual growth rate of 37.9%. In an ever-changing healthcare sector, it's not wise for the private practice to be left behind.

Still, the industry faces serious technical and strategic challenges. Health data is diverse, complicated and unstructured across a range of criteria, making it very difficult for, say, a small practice doctor to penetrate – especially if he/she has limited experience operating in the tech sphere.

Below are some healthcare analytics tips for more business-minded physicians, no prior experience required.

Choose the right system for reporting. If you’re going to use analytics to better organize your business, don’t choose an inefficient one that’ll further complicate matters. A recent KLAS report titled Business Intelligence: Making Cents of Performance outlines some helpful features that providers searching for (or currently using) analytics tools should keep in mind. These include:

  • Quick implementation
  • Easy-to-use interface
  • Customizable to suit unique organizational needs
  • Ability to develop personalized dashboards for users
  • Flexibility to accommodate other parties like pharmacies, health plans, government entities, financial institutions, etc.

Integrate analytics with training. Practices should teach analytics to new hires, from staff members to doctors. This will help every member of the office understand how analytics data helps both patients and the execution of his/her daily tasks, to the point where seeing through data goggles becomes second nature.

Use dashboards for doctors at your practice to visualize data. As analytics platforms move closer to real-time processing and reporting – at the point of care, even – your practice should focus on updating processes and developing capabilities that enable analytics tool use, particularly for real-time clinical decision support.

Use Google Analytics for online marketing efforts. Make sure your Google Analytics configuration is up to date, and set up goals for your website, e.g., conversions on opt-in forms and engagement in the form of time on site and page depth.

Spot barriers to analytics adoption. According to an IBM study, titled “The Value of Analytics in Healthcare,” many healthcare executives have a difficult time differentiating bad and good data.

If this is not the case, other common barriers include lack of data-driven culture, lack of connecting the power of analytics to business improvement tactics, lack of management bandwidth or a perception that costs outweigh benefits.

Adopt an EHR that provides business data. Some people are huge fans of 2-in-1 shampoo/conditioner combos. While this is a more serious investment, purchasing an EHR system with a built-in analytics platform may be the right choice for practices that don't have the time or budget to seek standalone solutions.

Humanize the data. While this may seem obvious, making the data accessible and friendly to the humans who will, after all, be employing, analyzing and making full use of it is essential. Usability may be the most crucial element of an effective analytics system, because you and your staff simply will not use a tool that makes your jobs more difficult.

Note: This article originally appeared in CareCloud. Click for link here.

Source by analyticsweekpick

How to Win Business using Marketing Data [infographics]

How to Win Business using Marketing Data

A marketer's job is to win the hearts and minds of customers and prospects. While the priority should be to account for the intricacies of customers' intellects and emotions, most marketing leans heavily on intellectual triggers and only lightly on emotional connection. What has consistently been overlooked is that, when wielded correctly, emotion is a much more potent persuasive force for forging connections than intellect. The following infographic explains how various channels are being utilized and how they act to make a business successful.

Source

A Super Scary Halloween Story for Data Scientists and other Change Agents


As the mist swirled and parted, the Seer suddenly appeared before me. He was barely recognizable as human. His skin was deeply etched from a hundred winters spent roaming the barren peaks and crags of Mount Olympus. He regarded me with his one good eye, and croaked words that chilled the air around us… “Hey, wassup? What can I do you for? And make it snappy, ’cause I’ve got baklava in the oven. You know how easy it is to burn baklava?”

“Oh, Seer, I’ve come a long way to ask you something that’s been troubling my soul for a very long time. I’ve been involved in many projects where a centralized Data Science team tries to help internal customers from a business unit leverage analytics in some way. I’ve seen mixed results. What’s the secret to a successful analytics engagement with internal business customers? How can I get them to cooperate with me, to take my findings seriously, and above all to actually implement changes to business processes based on my findings?”

The Seer smiled. Or was it a sneer? “Can you handle the truth?” he said.

That sounded vaguely familiar… “Didn’t Jack Nicholson say that in…”

“Jack got it from me!” the Seer snapped. “Can you handle the truth?”

“I… think so…”, I sputtered. “Yes! Give it to me straight!”

The contempt-o-meter

“Here’s the thing,” said the Seer, sniffing the air for the slightest hint of burning baklava. “As soon as you start having feelings of contempt for your internal customers – let’s call them your clients for short – you’re done. Cash in your chips. Go home. It’s over. Humans are exquisitely fine-tuned to sense the feelings of others. It’s simply impossible to hide your feelings of contempt from your clients. And nobody wants to work with some pretentious jackass who they sense is always looking down on them.”

“Well, no problem there!” I preened. “I pride myself on always being professional and respectful towards my clients.”

“Oh, really?” he said, his smile-sneer growing wider.

“Clients can sense how you feel about them. Ever sat in a meeting with your clients, and thought to yourself that you’re surrounded by mouth breathing knuckle draggers?”

“Well… ”

“They can sense how you feel about them. Ever complained to your colleagues about how woefully misguided your clients are?”

“I…. ”

“They can sense how you feel about them. Ever… ”

“Okay, okay, I get it. I can see how thinking about my clients like that is not going to win me their cooperation or help me be an effective Data Scientist, which is all about changing a company’s behavior in some way, whether big or small. But exactly how can I silence my judgmental internal dialog?”

First things first

“Build empathy for your clients. Empathy… A good, solid Greek word if there ever was one. And empathy in the context of cross-organizational relationships is not about moral virtue. It’s about getting things done that are good for the business, and good for your career at the same time. Oh, and you can’t fake empathy. The contempt radar that I mentioned earlier? Yeah, it also detects insincerity.”

A piece of the Seer’s ear fell off, but it didn’t seem to bother him.

“But before we get into my specific advice for increasing your empathy for internal customers,” he said, “you have to be honest with yourself. Do you really give a rat’s ass about their happiness? Really? If yes, then continue. If not, then go back to raising your goats, or practicing your lute, or weaving your baskets from found human hair to sell on Etsy. Building empathy is very hard work. But it’s also the only path I’ve found to delivering happiness to internal customers, which turns out to be the golden road to effectiveness as you’ve defined it. So, do you have a heartfelt desire to make your clients happy?”

I nodded.

You’re wrong. You’re just wrong.

“What if I told you that everything you know is a lie?” quizzed the Seer. I was expecting him to launch into the whole red-pill blue-pill thing, but he skipped it.

“Your contempt for your internal customer is built on your perceptions. You think you know all about your client. But your mind is endlessly filling in knowledge gaps with fantasy. Your mind constructs your perceptions out of teensy bits of reality plus huge doses of stereotypes and random gastric disturbances. Think about how often in the past you’ve misread people and situations, and you’ll realize that much of what you think you know about your client is probably just plain wrong.”

Beginner’s mind

The Seer took another whiff of air, like an Irish Setter on the scent.

“Here’s one idea for cultivating authentic empathy. Two mountains over there are these Buddhists. They’re an absolute riot at my fondue parties, by the way… Anyhow, they talk about the importance of having a beginner’s mind. It means that, regardless of how many decades you’ve been practicing meditation, you should approach each new meditation session as if it were your first time meditating. You should approach it with anticipation, curiosity, and an openness to being surprised.”

“What if you took the same kind of approach to your internal customers?”

On not peeing into the wind

“For example: What if you took them to lunch, and asked them questions that helped you to deeply understand what makes them tick at work:

1) What brings them joy in their job?
2) What brings them dread?
3) What are their career aspirations?
4) What makes their boss praise them?
5) What makes their boss yell at them?
6) What must they do to achieve their job goals (and hence their bonus)?

“You were expecting project-related questions, right?” the Seer sneer-smiled.

“The truth is, nobody gives a sh*t about your equations and graphs, per se. But they deeply give a sh*t about how your equations and graphs might impact them along those personal dimensions, both positively and negatively. They’ll never say so, but they do. They might not even consciously realize it, but they do.”

“So, what I’m saying is, as you think about how you are going to get your Brilliant Data Science Idea implemented, deconstruct those personal dimensions of your clients, and then explain the benefits of your Brilliant Data Science Idea to them in ways that address those personal dimensions.”

“But isn’t that manipulative? you ask. Only if it’s done with malice, I answer.”

“Here’s an analogy. You are out sailing on the wine dark sea, and you want to get your little boat from point A to point B, because you’ve heard that the feta is amazing at point B. Isn’t it wise to consider where the winds are blowing, and where the shoals are lurking, and to get aligned with the great forces of nature, rather than be willfully ignorant of them? Isn’t it better to leverage those forces, rather than to fight them? Where is the manipulation and malice in that?”

“Look, I don’t expect you to just take my word for it. Try it for yourself. Experiment with it. Play with it. Then come back and tell me how it went.”

Baklava’s done

The sweet smell of freshly baked baklava was now competing with the Seer’s formidable stench. “I love the smell of baklava in the morning!” said the Seer.

“Thanks for the advice,” I said. “But it sounds like very hard work. Interpersonal skills… Change management strategies… These are not exactly part of the standard Data Science repertoire.”

“True dat,” said the Seer, winking at me with his one good eye.

“But luckily you don’t have to be perfect at it. Because you know what they say… In the land of the blind, the one-eyed man is…”

“… king!” I answered.

And with that, the Seer was gone.

Please feel free to contact me via LinkedIn

Source: A Super Scary Halloween Story for Data Scientists and other Change Agents by groumeliotis

Who Is Your ‘Biggest Fan’ on Facebook? Navigating the Facebook Graph API ~ 2016 Tutorial

Before we begin, here’s a working example of this quick Facebook app on my Github 🙂


There are a few (or a lot, depending on your excitement) cool things you can do with the Facebook Graph API.

First of all, what is the Graph API?

  1. In short, it’s our way of getting Facebook goodies like posts, pictures, status updates, friends list, all that good stuff. We can also post data (AKA update our status) using the Graph API.

For this mini blog tutorial, I’m going to cover the getting part.

In particular, I’ll be demonstrating how to find:

  1. Your most liked posts
  2. The friends who most like your posts (your biggest fans)

I’ll be using JavaScript and Python for this tutorial. No worries if these languages aren’t your go-to; the concepts I cover in this tutorial are constant across all languages.

Let’s roll.

1. Boilerplate (boring stuff) out of the way

First, let’s set up a very simple JavaScript SDK so we can talk to Facebook using JavaScript.

I didn't want to waste precious space with boilerplate code, so check it out on my Github.

2. Basic GET request

Let’s do a basic GET request. Let’s get all my posts, messages, stories along with the likes and comments associated with each post.

function getPosts() {
   FB.api('me/posts/?fields=comments.summary(true),likes.summary(true).fields(name), message,  story',
   function(response) {
       passPosts(JSON.stringify(response))
   });
} 

Ignore the passPosts() function for now.

This returns a JSON response as such:

{
  "data": [
    {
      "message": "something about the bao",
      "story": "Nikhil Bhaskar updated his profile picture.",
      "id": "[id of post]",
      "likes": {
        "data": [
          {
            "id": "[id of liker]",
            "name": "[name of liker]"
          }
        ],
        "summary": {
          "total_count": [num of likes],
          "can_like": true,
          "has_liked": false
        }
      }
    },
    {}, {}, {}
  ],
  "paging": {
    "previous": "[url]",
    "next": "[url]"
  }
}

Notice how our result has been paginated. In other words, to actually get all of our posts, we need to run another API call on the URL in "paging": "next".

We should avoid multiple API calls whenever possible, so let's slightly modify our basic GET request.

'me/posts/?limit=5000&fields=comments.summary(true),likes.summary(true).fields(name), message,  story'

Notice how we now include a limit of 5000 posts. This seems to be the maximum limit we can set (I'm not sure; it was more trial & error here). This way, we get as many posts as we can in one API call and greatly reduce the number of API calls we make.

Learner’s check:

  1. Our ‘response’ object is a JavaScript object. In order to pass it around easily, we convert it to a string with JSON.stringify()

3. AJAX call to Python

Let’s pass our JSON response to our Python backend so we can further process it.

function passPosts(userPosts){
        $.ajax({
          method: "POST",
          url: "/fb_login/",
          data: { 
            "user_posts": userPosts
            }
        })
        .success(function(data) {
          //handle results
        });
      }

Learner’s check

  1. We make an AJAX POST request to the URL route ‘fb_login’, passing our userPosts

4. Python ~ Get the AJAX POST data

Side note: I am using Django. You can use whatever framework (or no framework) you want

In views.py, let’s get our Facebook API response:

from django.shortcuts import render  # assumes Django, as noted above

def fb_login(request):
	if request.method == 'POST':
		all_posts_dict = get_all_posts_dict(request.POST['user_posts'])

		'''Ignore the rest of the function for now
		all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

		my_most_liked_post = sort_posts_dict_by_likes(all_posts_dict)[0]
		print my_most_liked_post
		'''
	return render(request, 'talentur/fb_login.html')

What does ‘get_all_posts_dict(arg)’ do?

def get_all_posts_dict(response):
	return tornado_all_posts_dict(string_to_dict(response))

As you can see, it calls 2 functions. So it does 2 tasks:

  1. Convert our response to a Python dictionary
  2. Call a function on this dictionary to get all of our posts (remember, the JSON response we got was paginated)

Here, we achieve task 1 with our string_to_dict function:

import json

def string_to_dict(json_string):
    return json.loads(json_string)

And here, we call a recursive function tornado_all_posts_dict to achieve task 2:

import requests

def tornado_all_posts_dict(response_dict, master_posts = None):
	master_posts = {'data':[]} if master_posts is None else master_posts
	posts = response_dict['data']
	master_posts['data'] = master_posts['data'] + posts
	# If the response is paginated, fetch the next page and recurse
	if 'paging' in response_dict and 'next' in response_dict['paging']:
		r = requests.get(response_dict['paging']['next']).json()
		tornado_all_posts_dict(r, master_posts)
	return master_posts

5. Find your most liked posts

We gotta do a little clean up, first. As you’ve noticed, the all_posts_dict is a Python dictionary with a “data” property.

“data” is a list of several dictionaries. Each dictionary in “data” is basically a post/message/story, etc. The problem is that some of these dictionaries don’t have a “likes” property.

Example:

{
      "id": "[id]"
}, ...

These are probably just occurrences when you change your cover photo to a photo you’ve already used before, for example. Although there are “likes” associated with your cover photo, there are no “likes” associated with the act of updating your cover photo back to this old picture. Make sense?

So, let’s remove all the dictionaries in “data” that have no “likes” property

def remove_dicts_from_list_based_on_key(response_dict, key):
	the_list = response_dict['data']
	return [dicti for dicti in the_list if key in dicti]

So, in views.py, in def fb_login function, add:

all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

Now, we can sort our all_posts_dict by “likes”:

def sort_posts_dict_by_likes(response_dict):
	list_of_user_post_objects = response_dict['data']
	list_of_user_post_objects = sorted(list_of_user_post_objects, key=lambda k: -k['likes']['summary']['total_count']) 
	return list_of_user_post_objects

Learner’s check

  1. “likes” has a “summary” property, which in turn has a “total_count” property
  2. “total_count” is the number we care about here
  3. -k because we are sorting in descending order

Now, in views.py, the def fb_login function should look like this:

def fb_login(request):
	if request.method == 'POST':

		all_posts_dict = get_all_posts_dict(request.POST['user_posts'])
		all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

		my_most_liked_post = sort_posts_dict_by_likes(all_posts_dict)[0]
		print my_most_liked_post
		
	return render(request, 'talentur/fb_login.html')

Our response:

{u'message': u'AHAHAHAHHA', u'id': u'1090366184360236_310878925642303', u'comments': {u'data': [], u'summary': {u'total_count': 171, u'can_comment': True, u'order': u'chronological'}}, u'likes': {u'data': [], u'summary': {u'total_count': 1643, u'has_liked': False, u'can_like': True}}}

This was a post I shared a long time ago. It got over 1000 likes, haha.


Obviously, our actual result is just a Python dictionary. But you can use its id to get everything associated with this post.
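For instance, here's a hedged sketch of fetching a single post by its id with the requests library (the access token is a placeholder, and the field list is just an example):

import requests

ACCESS_TOKEN = 'your-user-access-token'  # placeholder, not a real token

post_id = '1090366184360236_310878925642303'
r = requests.get('https://graph.facebook.com/v2.8/' + post_id,
                 params={'access_token': ACCESS_TOKEN,
                         'fields': 'message,likes.summary(true),comments.summary(true)'})
print r.json()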

Let’s move on…

6. Find your biggest fans

First, let's put every friend who liked your posts into a list of tuples. Each tuple will contain the id and name of that friend.

def liker_ids_tornado(response, like_ids_list = None):
	like_ids_list = [] if like_ids_list is None else like_ids_list
	data_list = response['data']

	for post_message_story in data_list:
		if 'likes' in post_message_story:
			for liker in post_message_story['likes']['data']:
				like_ids_list.append((liker['id'], liker['name']))
	if 'paging' in response and 'next' in response['paging']:
		r = requests.get(response['paging']['next']).json()
		liker_ids_tornado(r, like_ids_list)
	return like_ids_list

Now, let's use the convenient Counter class from the ‘collections’ module

from collections import Counter

def get_most_likers(like_ids_list):
	id_results_dict = Counter(like_ids_list)
	return id_results_dict

Here’s what our def fb_login function looks like now:

def fb_login(request):
	if request.method == 'POST':

		all_posts_dict = get_all_posts_dict(request.POST['user_posts'])
		all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

		like_ids_list = liker_ids_tornado(all_posts_dict)
		my_biggest_fans = get_most_likers(like_ids_list)
		print my_biggest_fans

	return render(request, 'talentur/fb_login.html')

Our response:

Counter({(u'id', u'Name'): 101, (u'id2', u'Name2'):97...}) 

The result is a Counter, which is a subclass of dict. So, my ‘biggest fan’ (whom I won’t disclose here) has given me a total of 101 likes.
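Because it's a Counter, ranking your fans doesn't require any sorting of our own: Counter's most_common() method returns (element, count) pairs in descending order of count. Building on the functions above, something like this would print the top three:

my_biggest_fans = get_most_likers(like_ids_list)

# most_common(3) -> the three (id, name) tuples with the highest like counts
for (fan_id, fan_name), num_likes in my_biggest_fans.most_common(3):
	print fan_name, num_likes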

There you have it. A little Facebook insight for ya.

Enjoy 🙂

Once again, here’s a working example of this quick Facebook app on my Github 🙂

Originally Posted at: Who Is Your ‘Biggest Fan’ on Facebook? Navigating the Facebook Graph API ~ 2016 Tutorial

2016 Trends in Big Data: Insights and Action Turn Big Data Small

Big data’s salience throughout the contemporary data sphere is all but solidified. Gartner indicates its technologies are embedded within numerous facets of data management, from conventional analytics to sophisticated data science issues.

Consequently, expectations for big data will shift this year. It is no longer sufficient to justify big data deployments by emphasizing the amount and variety of data these technologies ingest; rather, justification rests on the specific business value they create by offering targeted applications and use cases providing, ideally, quantifiable results.

The shift in big data expectations, then, will go from big to small. That transformation in the perception and deployments of big data will be spearheaded by numerous aspects of data management, from the evolving roles of Chief Data Officers to developments in the Internet of Things. Still, the most notable trends impacting big data will inevitably pertain to the different aspects of:

• Ubiquitous Machine Learning: Machine learning will prove one of the most valuable technologies for reducing time to insight and action for big data. Its propensity for generating future algorithms based on the demonstrated use and practicality of current ones can improve analytics and the value it yields. It can also expedite numerous preparation processes related to data integration, cleansing, transformation and others, while smoothing data governance implementation.
• Cloud-Based IT Outsourcing: The cloud benefits of scale, cost, and storage will alter big data initiatives by transforming IT departments. The new paradigm for this organizational function will involve a hybridized architecture in which all but the most vital and longstanding systems are outsourced to complement existing infrastructure.
• Data Science for Hire: Whereas some of the more complicated aspects of data science (tailoring solutions to specific business processes) will remain tenuous, numerous aspects of this discipline have become automated and accelerated. The emergence of a market for algorithms, Machine Learning-as-a-Service, and self-service data discovery and management tools will spur this trend.

From Machine Learning to Artificial Intelligence
The correlation between these three trends is perhaps best typified by the increasing prevalence of machine learning, which is an integral part of many of the analytics functions that IT departments are outsourcing and of the aspects of data science that have become automated. Expectations for machine learning will truly blossom this year, with Gartner offering numerous predictions for the end of the decade in which elements of artificial intelligence are normative parts of daily business activities. The projected expansion of the IoT, and the automated predictive analytics required for its continued growth, will increase the reliance on machine learning, while its applications in various data preparation and governance tools are equally vital.

Nonetheless, the chief way in which machine learning will help to shift the focus of big data from sprawling to narrow relates to the fact that it either eschews or hastens human involvement in all of the aforementioned processes, and in many others as well. Forrester predicted that: “Machine learning will replace manual data wrangling and data governance dirty work…The freeing up of time will accelerate the execution of data and analytics strategies, allowing organizations to get to the good stuff, taking actions and driving better business outcomes based on the data.” Machine learning will enable organizations to spend less time managing their data and more time creating action from the insights they provide.

Accelerating data management processes also enables users to spend more time understanding their data. John Rueter, Vice President of Marketing at Cambridge Semantics, noted the importance of establishing the context and meaning of data: “Everyone is in such a race to collect as much data as they can and store it so they can get to it when they want to, when oftentimes they really aren’t thinking ahead of time about what they want to do with it, and how it is going to be used. The fact of the matter is, what’s the point of collecting all this data if you don’t understand it?”

Cloud-Based IT
The trend of outsourcing IT to the cloud is evinced in a number of ways, from a distributed model of data management to one in which IT resources are more frequently accessed through the cloud. The variety of basic data management services that the enterprise is able to outsource via the cloud (including analytics, integration, computations, CRM, etc.) is revamping typical architectural concerns, which increasingly involve the cloud. These facts are substantiated by IDC’s predictions that, “By 2018, at least 50% of IT spending will be cloud based. By 2018, 65% of all enterprise IT assets will be housed offsite and 33% of IT staff will be employed by third-party, managed service providers.”

The impact of this trend goes beyond merely extending the cloud’s benefits of decreased infrastructure, lower costs, and greater agility. It means that a number of pivotal facets of data management will require less daily manipulation on the part of the enterprise, and that end users can implement the results of those data-driven processes more quickly and for more specific use cases. Additionally, this trend heralds a fragmentation of the CDO role. The inherent decentralization involved in outsourcing IT functions through the cloud will be reflected in an evolution of this position. The foregoing Forrester post notes that “We will likely see fewer CDOs going forward but more chief analytics officers, or chief data scientists. The role will evolve, not disappear.”

Self-Service Data Science
Data science is another realm in which the other two 2016 trends in big data coalesce. The predominance of machine learning helps to improve the analytical insight gleaned from data science, just as a number of key attributes of this discipline are being outsourced and accessed through the cloud. Those include numerous facets of the analytics process, including data discovery, source aggregation, multiple types of analytics and, in some instances, even analysis of the results themselves. As Forrester indicated, data science and real-time analytics will collapse the insights time-to-market, as real-time data capture and analytics continue to close the gaps between data, insight and action. For 2016, Forrester predicts: “A third of firms will pursue data science through outsourcing and technology. Firms will turn to insights services, algorithms markets, self-service advanced analytics tools, and cognitive computing capabilities, to help fill data science gaps.”

Self-service data science options encompass myriad forms, from providers of graph analytics and Machine Learning-as-a-Service to various forms of cognitive computing. The burgeoning algorithms market is a vital aspect of this automation of data science, and enables companies to leverage previously existing algorithms with their own data. Some algorithms are stratified according to use case, business unit, or vertical industry. Similarly, Machine Learning-as-a-Service options provide excellent starting points for organizations to simply add their data and reap predictive analytics capabilities.

Targeting Use Cases to Shrink Big Data
The principal point of commonality between all of these trends is the furthering of the self-service movement and the ability it gives end users to home in on the uses of data, as opposed to merely focusing on the data itself and its management. The ramification is that organizations and individual users will be able to tailor and target their big data deployments for individualized use cases, creating more value at the departmental and intradepartmental levels…and for the enterprise as a whole. The facilitation of small applications and uses of big data will justify this technology’s dominance of the data landscape.

Source: 2016 Trends in Big Data: Insights and Action Turn Big Data Small

The Big Data Game-Changer: Public Data and Semantic Graph Databases

By Dr. Jans Aasman, Ph.D, CEO of Franz Inc.

Big data's influence across the data landscape is well known, and virtually undeniable. Organizations are adopting a greater diversity of sources and data structures in rapidly increasing quantities, while demanding the results of analytics faster and faster.

Also of great importance is how big data's influence is shaping that landscape. Gartner asserted, “The number and variety of public-facing open datasets and Web APIs published by all tiers of governments worldwide continues to increase.” The inclusion of this growing variety of public data sources shows that big data is actually also big public data.

The key is to expeditiously integrate that data—in a well-governed, sustainable manner—with proprietary enterprise data for timely analytic action. Semantic graph database technology is built to facilitate data integration and as such surpasses virtually every other method for leveraging public data. The recent explosion of public sources of big data is effectively dictating the need for semantic graph databases.

The Smart Data Approach
More than any other type of analytics, public big data analysis and integration comprehensively utilizes the self-describing, smart data technologies on which semantic graph databases hinge. The exorbitant volumes and velocities of big data benefit from this intrinsic understanding of specific data elements that are expressed in semantic statements known as triples. But it’s the growing variety of data types included in integrating public and private big data sources that exploit this self-identifying penchant of semantic data—especially when linking disparate data sets.

This facet of smart data proves invaluable when modeling and integrating structured and unstructured (public) data during the analytic preparation process. The same methods by which proprietary data are modeled can be used to incorporate public data sources in a uniform way. When integrating unstructured or semi-structured public data with structured data for fraud detection, hedge fund analysis or other use cases, semantic graph databases’ propensity to readily glean the meaning of data and relationship between data elements is critical to immediate responses.

Triple Intelligence
Triple stores are integral to incorporating public big data with internal company sources because they provide a form of machine intelligence that is essential to expanding the understanding of how data elements relate to each other. Every semantic statement provides meaning about data. Triple stores utilize these statements as the basis for further inferences about the ways data interrelates.

For example, say the enterprise data warehouse of a hospital has data about a patient that is expressed in triples such as: patient X takes the drug Aspirin, and patient X takes the drug Insulase. A publicly available medical drug database will have triples such as: Chlorpropamide has the brand name Insulase, and Chlorpropamide has a drug interaction with Aspirin. The reasoning in the triple store will instantly conclude that patient X has a problem.
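To make that inference concrete, here is a toy illustration in Python (plain tuples standing in for a real triple store; actual semantic graph databases do this with RDF statements and reasoners):

# Each triple is a (subject, predicate, object) statement
triples = {
    ("patient_x", "takes", "Aspirin"),
    ("patient_x", "takes", "Insulase"),
    ("Chlorpropamide", "has_brand_name", "Insulase"),
    ("Chlorpropamide", "interacts_with", "Aspirin"),
}

# Resolve brand names to the underlying drug...
brand_to_generic = {o: s for (s, p, o) in triples if p == "has_brand_name"}
taken = {brand_to_generic.get(o, o)
         for (s, p, o) in triples if s == "patient_x" and p == "takes"}

# ...then flag any known interaction among the drugs the patient takes
for (drug_a, p, drug_b) in triples:
    if p == "interacts_with" and drug_a in taken and drug_b in taken:
        print("patient_x has a problem:", drug_a, "interacts with", drug_b)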

Such an example illustrates the usefulness of triple stores when contextualizing public big data integrated with internal data. Firstly, this type of basic inferencing is not possible with other technologies, including both relational databases and graph databases that do not involve semantics. The latter focus on the graph's nodes and their properties; semantic graph databases focus on the relationships between nodes (the edges). Furthermore, such intelligent inferencing illustrates the fact that these stores can actually learn. Finally, such inferencing is invaluable when leveraged at scale and accounting for the numerous subtleties within big data, and is another way of deriving meaning from data in low-latency production environments.

Public Big Data
Much of the value that public big data delivers pertains to general knowledge generated by researchers, scientists and data analysts from the government. By integrating this knowledge with big data within the enterprise we can build new applications that benefit the enterprise and society.

Dr. Jans Aasman, Ph.D., is the CEO of Franz Inc., an early innovator in Artificial Intelligence and a leading supplier of Semantic Graph Database technology.

Source: The Big Data Game-Changer: Public Data and Semantic Graph Databases by jaasman

February 27, 2017 Health and Biotech analytics news roundup

The latest news and commentary on healthcare analytics:

IBM Debuts Watson Imaging Clinical Review, the First Cognitive Imaging Offering: The tool will first help identify a common cardiovascular disease (aortic stenosis) by combining imaging data with other data sources. IBM hopes to expand the tool to identify other cardiovascular conditions.

New gene sequencing tool could aid in early detection, treatment of cancer: Researchers at Johns Hopkins, the University of Toronto, and the Ontario Institute for Cancer Research have developed a method for directly detecting some DNA sequence modifications.

Wearing your brain on your sleeve: Rhoda Au, a scientist at Boston University, is collecting day-by-day data about how Alzheimer’s and other dementias progress.

Cancer vs. the machine: how to personalise treatment using computing power: Microsoft’s Jasmin Fisher is using simulations of the cell to help advance drug discovery.

Source