How to Win Business using Marketing Data [infographics]

How to Win Business using Marketing Data

A marketer’s job is to win the hearts and minds of customers and prospects. Although the priority should be to account for both the intellect and the emotions of the customer, most marketing leans heavily on intellectual triggers and only lightly on emotional connection. What is consistently overlooked is that, when wielded correctly, emotion is a far more potent persuasive force for forging connections than intellect. The following infographic shows how various channels are being used and how they contribute to winning business.

Source

A Super Scary Halloween Story for Data Scientists and other Change Agents


As the mist swirled and parted, the Seer suddenly appeared before me. He was barely recognizable as human. His skin was deeply etched from a hundred winters spent roaming the barren peaks and crags of Mount Olympus. He regarded me with his one good eye, and croaked words that chilled the air around us… “Hey, wassup? What can I do you for? And make it snappy, ’cause I’ve got baklava in the oven. You know how easy it is to burn baklava?”

“Oh, Seer, I’ve come a long way to ask you something that’s been troubling my soul for a very long time. I’ve been involved in many projects where a centralized Data Science team tries to help internal customers from a business unit leverage analytics in some way. I’ve seen mixed results. What’s the secret to a successful analytics engagement with internal business customers? How can I get them to cooperate with me, to take my findings seriously, and above all to actually implement changes to business processes based on my findings?”

The Seer smiled. Or was it a sneer? “Can you handle the truth?” he said.

That sounded vaguely familiar… “Didn’t Jack Nicholson say that in…”

“Jack got it from me!” the Seer snapped. “Can you handle the truth?”

“I… think so…”, I sputtered. “Yes! Give it to me straight!”

The contempt-o-meter

“Here’s the thing,” said the Seer, sniffing the air for the slightest hint of burning baklava. “As soon as you start having feelings of contempt for your internal customers – let’s call them your clients for short – you’re done. Cash in your chips. Go home. It’s over. Humans are exquisitely fine-tuned to sense the feelings of others. It’s simply impossible to hide your feelings of contempt from your clients. And nobody wants to work with some pretentious jackass who they sense is always looking down on them.”

“Well, no problem there!” I preened. “I pride myself on always being professional and respectful towards my clients.”

“Oh, really?” he said, his smile-sneer growing wider.

“Clients can sense how you feel about them. Ever sat in a meeting with your clients, and thought to yourself that you’re surrounded by mouth breathing knuckle draggers?”

“Well… ”

“They can sense how you feel about them. Ever complained to your colleagues about how woefully misguided your clients are?”

“I…. ”

“They can sense how you feel about them. Ever… ”

“Okay, okay, I get it. I can see how thinking about my clients like that is not going to win me their cooperation or help me be an effective Data Scientist, which is all about changing a company’s behavior in some way, whether big or small. But exactly how can I silence my judgmental internal dialog?”

First things first

“Build empathy for your clients. Empathy… A good, solid Greek word if there ever was one. And empathy in the context of cross-organizational relationships is not about moral virtue. It’s about getting things done that are good for the business, and good for your career at the same time. Oh, and you can’t fake empathy. The contempt radar that I mentioned earlier? Yeah, it also detects insincerity.”

A piece of the Seer’s ear fell off, but it didn’t seem to bother him.

“But before we get into my specific advice for increasing your empathy for internal customers,” he said, “you have to be honest with yourself. Do you really give a rat’s ass about their happiness? Really? If yes, then continue. If not, then go back to raising your goats, or practicing your lute, or weaving your baskets from found human hair to sell on Etsy. Building empathy is very hard work. But it’s also the only path I’ve found to delivering happiness to internal customers, which turns out to be the golden road to effectiveness as you’ve defined it. So, do you have a heartfelt desire to make your clients happy?”

I nodded.

You’re wrong. You’re just wrong.

“What if I told you that everything you know is a lie?” quizzed the Seer. I was expecting him to launch into the whole red-pill blue-pill thing, but he skipped it.

“Your contempt for your internal customer is built on your perceptions. You think you know all about your client. But your mind is endlessly filling-in knowledge gaps with fantasy. Your mind constructs your perceptions out of teensy bits of reality plus huge doses of stereotypes and random gastric disturbances. Think about how often in the past you’ve misread people and situations, and you’ll realize that much of what you think you know about your client is probably just plain wrong.”

Beginner’s mind

The Seer took another whiff of air, like an Irish Setter on the scent.

“Here’s one idea for cultivating authentic empathy. Two mountains over there are these Buddhists. They’re an absolute riot at my fondue parties, by the way… Anyhow, they talk about the importance of having a beginner’s mind. It means that, regardless of how many decades you’ve been practicing meditation, you should approach each new meditation session as if it were your first time meditating. You should approach it with anticipation, curiosity, and an openness to being surprised.”

“What if you took the same kind of approach to your internal customers?”

On not peeing into the wind

“For example: What if you took them to lunch, and asked them questions that helped you to deeply understand what makes them tick at work:

1) What brings them joy in their job?
2) What brings them dread?
3) What are their career aspirations?
4) What makes their boss praise them?
5) What makes their boss yell at them?
6) What must they do to achieve their job goals (and hence their bonus)?

“You were expecting project-related questions, right?” the Seer sneer-smiled.

“The truth is, nobody gives a sh*t about your equations and graphs, per se. But they deeply give a sh*t about how your equations and graphs might impact them along those personal dimensions, both positively and negatively. They’ll never say so, but they do. They might not even consciously realize it, but they do.”

“So, what I’m saying is, as you think about how you are going to get your Brilliant Data Science Idea implemented, deconstruct those personal dimensions of your clients, and then explain the benefits of your Brilliant Data Science Idea to them in ways that address those personal dimensions.”

“But isn’t that manipulative?, you ask. Only if it’s done with malice, I answer.”

“Here’s an analogy. You are out sailing on the wine dark sea, and you want to get your little boat from point A to point B, because you’ve heard that the feta is amazing at point B. Isn’t it wise to consider where the winds are blowing, and where the shoals are lurking, and to get aligned with the great forces of nature, rather than be willfully ignorant of them?  Isn’t it better to leverage those forces, rather than to fight them? Where is the manipulation and malice in that?”

“Look, I don’t expect you to just take my word for it. Try it for yourself. Experiment with it. Play with it. Then come back and tell me how it went.”

Baklava’s done

The sweet smell of freshly baked baklava was now competing with the Seer’s formidable stench. “I love the smell of baklava in the morning!” said the Seer.

“Thanks for the advice,” I said. “But it sounds like very hard work. Interpersonal skills… Change management strategies… These are not exactly part of the standard Data Science repertoire.”

“True dat,” said the Seer, winking at me with his one good eye.

“But luckily you don’t have to be perfect at it. Because you know what they say… In the land of the blind, the one-eyed man is…”

“… king!” I answered.

And with that, the Seer was gone.

Please feel free to contact me via LinkedIn

Source: A Super Scary Halloween Story for Data Scientists and other Change Agents by groumeliotis

Who Is Your ‘Biggest Fan’ on Facebook? Navigating the Facebook Graph API ~ 2016 Tutorial

Before we begin, here’s a working example of this quick Facebook app on my Github 🙂


There are a few (or a lot, depending on your excitement) cool things you can do with the Facebook Graph API.

First of all, what is the Graph API?

  1. In short, it’s our way of getting Facebook goodies like posts, pictures, status updates, friends list, all that good stuff. We can also post data (AKA update our status) using the Graph API.
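To make that concrete, here is a minimal sketch (not part of this tutorial’s code) of hitting the Graph API directly over HTTP with Python’s requests library. The access token value is a placeholder you would obtain from Facebook’s Graph API Explorer or a login flow, and the API version in the URL is just an example.

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; get a real token from Facebook

# Ask the Graph API for the current user's id and name
resp = requests.get(
    "https://graph.facebook.com/v2.5/me",
    params={"fields": "id,name", "access_token": ACCESS_TOKEN},
)
print(resp.json())  # e.g. {"id": "...", "name": "..."}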

For this mini blog tutorial, I’m going to cover the getting part.

In particular, I’ll be demonstrating how to find:

  1. Your most liked posts
  2. The friends who most like your posts (your biggest fans)

I’ll be using JavaScript and Python for this tutorial. No worries if these languages aren’t your go-to; the concepts I cover here carry over to any language.

Let’s roll.

1. Boilerplate (boring stuff) out of the way

First, let’s set up a very simple JavaScript SDK so we can talk to Facebook using JavaScript.

I didn’t want to waste precious code space with boilerplate code, so check it out on my Github.

2. Basic GET request

Let’s do a basic GET request. Let’s get all my posts, messages, stories along with the likes and comments associated with each post.

function getPosts() {
    FB.api('me/posts/?fields=comments.summary(true),likes.summary(true).fields(name),message,story',
    function(response) {
        passPosts(JSON.stringify(response))
    });
}

Ignore the passPosts() function for now

This returns a JSON response as such:

{
  "data": [
    {
      "message": "something about the bao",
      "story": "Nikhil Bhaskar updated his profile picture.",
      "id": "[id of post]",
      "likes": {
        "data": [
          {
            "id": "[id of liker]",
            "name": "[name of liker]"
          },
          ...
        ],
        "summary": {
          "total_count": [num of likes],
          "can_like": true,
          "has_liked": false
        }
      }
    },
    {}, {}, ...
  ],
  "paging": {
    "previous": "[url]",
    "next": "[url]"
  }
}

Notice how our result has been paginated. In other words, to actually get all of our posts, we need to run an API call again, on the “paging”:”next”: url.

We should avoid multiple API calls whenever possible, so let’s slightly modify our basic GET request.

'me/posts/?limit=5000&fields=comments.summary(true),likes.summary(true).fields(name),message,story'

Notice that we now include a limit of 5000 posts. As far as I can tell, this is the maximum limit we can set (I’m not sure; it was mostly trial & error here). This way, we get as many posts as we can in one API call and greatly reduce the number of calls we make.

Learner’s check:

  1. Our ‘response’ object is a JavaScript object. In order to pass it around easily, we convert it to a string with JSON.stringify()

3. AJAX call to Python

Let’s pass our JSON response to our Python backend so we can further process it.

function passPosts(userPosts){
        $.ajax({
          method: "POST",
          url: "/fb_login/",
          data: { 
            "user_posts": userPosts
            }
        })
        .success(function(data) {
          //handle results
        });
      }

Learner’s check

  1. We make an AJAX POST request to the URL route ‘fb_login’, sending our userPosts as the payload

4. Python ~ Get the AJAX POST data

Side note: I am using Django. You can use whatever framework (or no framework) you want

In views.py, let’s get our Facebook API response:

def fb_login(request):
	if request.method == 'POST':

		all_posts_dict = get_all_posts_dict(request.POST['user_posts'])

		'''Ignore the rest of the function for now
		all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

		my_most_liked_post = sort_posts_dict_by_likes(all_posts_dict)[0]
		print my_most_liked_post
		'''
	return render(request, 'talentur/fb_login.html')

What does ‘get_all_posts_dict(arg)’ do?

def get_all_posts_dict(response):
	return tornado_all_posts_dict(string_to_dict(response))

As you can see, it calls 2 functions. So it does 2 tasks:

  1. Convert our response to a Python dictionary
  2. Call a function on this dictionary to get all of our posts (remember, the JSON response we got was paginated)

Here, we achieve task 1 with our string_to_dict function:

import json

def string_to_dict(json_string):
    return json.loads(json_string)

And here, we call a recursive function tornado_all_posts_dict to achieve task 2:

import requests

def tornado_all_posts_dict(response_dict, master_posts = None ):
	# Recursively follow the 'paging.next' URL, accumulating every page of posts
	master_posts = {'data':[]} if master_posts is None else master_posts
	posts = response_dict['data']
	master_posts['data'] = master_posts['data'] + posts
	if 'paging' in response_dict and 'next' in response_dict['paging']:
		r = requests.get(response_dict['paging']['next']).json()
		tornado_all_posts_dict(r, master_posts)
	return master_posts

5. Find your most liked posts

We gotta do a little clean up, first. As you’ve noticed, the all_posts_dict is a Python dictionary with a “data” property.

“data” is a list of several dictionaries. Each dictionary in “data” is basically a post/message/story, etc. The problem is that some of these dictionaries don’t have a “likes” property.

Example:

{
      "id": "[id]"
}, ...

These are probably just occurrences when you change your cover photo to a photo you’ve already used before, for example. Although there are “likes” associated with your cover photo, there are no “likes” associated with the act of updating your cover photo back to this old picture. Make sense?

So, let’s remove all the dictionaries in “data” that have no “likes” property

def remove_dicts_from_list_based_on_key(response_dict, key):
	the_list = response_dict['data']
	return [dicti for dicti in the_list if key in dicti]

So, in views.py, in def fb_login function, add:

all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

Now, we can sort our all_posts_dict by “likes”:

def sort_posts_dict_by_likes(response_dict):
	list_of_user_post_objects = response_dict['data']
	list_of_user_post_objects = sorted(list_of_user_post_objects, key=lambda k: -k['likes']['summary']['total_count']) 
	return list_of_user_post_objects

Learner’s check

  1. “likes” has a “summary” property, which in turn has a “total_count” property
  2. “total_count” is the number we care about here
  3. -k because we are sorting in descending order (an equivalent using reverse=True is sketched below)
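As a side note (my own addition, not from the tutorial), negating the key is equivalent to passing reverse=True to sorted:

# Equivalent to the -k trick above: sort descending with reverse=True
list_of_user_post_objects = sorted(
    list_of_user_post_objects,
    key=lambda k: k['likes']['summary']['total_count'],
    reverse=True,
)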

Now, in views.py, the def fb_login function should look like this:

def fb_login(request):
	if request.method == 'POST':

		all_posts_dict = get_all_posts_dict(request.POST['user_posts'])
		all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

		my_most_liked_post = sort_posts_dict_by_likes(all_posts_dict)[0]
		print my_most_liked_post
		
	return render(request, 'talentur/fb_login.html')

Our response:

{u'message': u'AHAHAHAHHA', u'id': u'1090366184360236_310878925642303', u'comments': {u'data': [], u'summary': {u'total_count': 171, u'can_comment': True, u'order': u'chronological'}}, u'likes': {u'data': [], u'summary': {u'total_count': 1643, u'has_liked': False, u'can_like': True}}}

This was a post I shared a long time ago. It got over 1,000 likes, haha.


Obviously, our actual result is just a Python dictionary. But you can use its id to get everything associated with this post.
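For example, here is a rough sketch (not part of the tutorial’s code) of fetching a single post by its id directly from the Graph API with requests; the token and the field list are placeholders, and the API version is just an example.

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
post_id = "1090366184360236_310878925642303"  # the id from the response above

resp = requests.get(
    "https://graph.facebook.com/v2.5/" + post_id,
    params={
        "fields": "message,story,likes.summary(true),comments.summary(true)",
        "access_token": ACCESS_TOKEN,
    },
)
print(resp.json())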

Let’s move on..

6. Find your biggest fans

First, let’s put every single friend who liked your posts into a list of tuples. Each tuple will contain the id and name of the friend.

def liker_ids_tornado(response, like_ids_list = None):
	like_ids_list = [] if like_ids_list is None else like_ids_list
	data_list = response['data']

	for post_message_story in data_list:
		if 'likes' in post_message_story:
			for liker in post_message_story['likes']['data']:
				like_ids_list.append((liker['id'], liker['name']))
	if 'paging' in response and 'next' in response['paging']:
		r = requests.get(response['paging']['next']).json()
		liker_ids_tornado(r, like_ids_list)
	return like_ids_list

Now, let’s use the convenient Counter class from the ‘collections’ module

from collections import Counter

def get_most_likers(like_ids_list):
	id_results_dict = Counter(like_ids_list)
	return id_results_dict

Here’s what our def fb_login function looks like now:

def fb_login(request):
	if request.method == 'POST':

		all_posts_dict = get_all_posts_dict(request.POST['user_posts'])
		all_posts_dict['data'] = remove_dicts_from_list_based_on_key(all_posts_dict, 'likes')

		like_ids_list = liker_ids_tornado(all_posts_dict)
		my_biggest_fans = get_most_likers(like_ids_list)
		print my_biggest_fans

	return render(request, 'talentur/fb_login.html')

Our response:

Counter({(u'id', u'Name'): 101, (u'id2', u'Name2'):97...}) 

The result is a Counter, which is a subclass of dict. So, my ‘biggest fan’ (who I won’t disclose here) has given me a total of 101 likes.
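If you only want the top few fans rather than the whole Counter, Counter.most_common() does the trick. A small sketch (my own addition; the data is made up):

from collections import Counter

def get_top_fans(like_ids_list, n=5):
    # most_common(n) returns the n (id, name) tuples with the highest like counts
    return Counter(like_ids_list).most_common(n)

# Example with made-up data:
# get_top_fans([('id1', 'Alice'), ('id2', 'Bob'), ('id1', 'Alice')], n=2)
# -> [(('id1', 'Alice'), 2), (('id2', 'Bob'), 1)]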

There you have it. A little Facebook insight for ya.

Enjoy 🙂

Once again, here’s a working example of this quick Facebook app on my Github 🙂

Originally Posted at: Who Is Your ‘Biggest Fan’ on Facebook? Navigating the Facebook Graph API ~ 2016 Tutorial

2016 Trends in Big Data: Insights and Action Turn Big Data Small

Big data’s salience throughout the contemporary data sphere is all but solidified. Gartner indicates its technologies are embedded within numerous facets of data management, from conventional analytics to sophisticated data science issues.

Consequently, expectations for big data will shift this year. It is no longer sufficient to justify big data deployments by emphasizing the sheer amount and variety of data these technologies ingest; what matters is the specific business value they create through targeted applications and use cases that deliver, ideally, quantifiable results.

The shift in big data expectations, then, will go from big to small. That transformation in the perception and deployments of big data will be spearheaded by numerous aspects of data management, from the evolving roles of Chief Data Officers to developments in the Internet of Things. Still, the most notable trends impacting big data will inevitably pertain to the different aspects of:

• Ubiquitous Machine Learning: Machine learning will prove one of the most valuable technologies for reducing time to insight and action for big data. Its propensity for generating future algorithms based on the demonstrated use and practicality of current ones can improve analytics and the value it yields. It can also expedite numerous preparation processes related to data integration, cleansing, transformation and others, while smoothing data governance implementation.
• Cloud-Based IT Outsourcing: The cloud benefits of scale, cost, and storage will alter big data initiatives by transforming IT departments. The new paradigm for this organizational function will involve a hybridized architecture in which all but the most vital and longstanding systems are outsourced to complement existing infrastructure.
• Data Science for Hire: Whereas some of the more complicated aspects of data science (tailoring solutions to specific business processes) will remain tenuous, numerous aspects of this discipline have become automated and accelerated. The emergence of a market for algorithms, Machine Learning-as-a-Service, and self-service data discovery and management tools will spur this trend.

From Machine Learning to Artificial Intelligence
The connection between these three trends is perhaps best typified by the increasing prevalence of machine learning, which is an integral part of many of the analytics functions that IT departments are outsourcing and of the aspects of data science that have become automated. Expectations for machine learning will truly blossom this year, with Gartner offering numerous predictions for the end of the decade in which elements of artificial intelligence are normative parts of daily business activities. The projected expansion of the IoT, and the automated predictive analytics required for its continued growth, will increase the reliance on machine learning, while its applications in various data preparation and governance tools are equally vital.

Nonetheless, the chief way in which machine learning will help to shift the focus of big data from sprawling to narrow relates to the fact that it either eschews or hastens human involvement in all of the aforementioned processes, and in many others as well. Forrester predicted that: “Machine learning will replace manual data wrangling and data governance dirty work…The freeing up of time will accelerate the execution of data and analytics strategies, allowing organizations to get to the good stuff, taking actions and driving better business outcomes based on the data.” Machine learning will enable organizations to spend less time managing their data and more time creating action from the insights they provide.

Accelerating data management processes also enables users to spend more time understanding their data. John Rueter, Vice President of Marketing at Cambridge Semantics, noted the importance of establishing the context and meaning of data. “Everyone is in such a race to collect as much data as they can and store it so they can get to it when they want to, when oftentimes they really aren’t thinking ahead of time about what they want to do with it, and how it is going to be used. The fact of the matter is what’s the point of collecting all this data if you don’t understand it?”

Cloud-Based IT
The trend of outsourcing IT to the cloud is evinced in a number of different ways, from a distributed model of data management to one in which IT resources are more frequently accessed through the cloud. The variety of basic data management services that the enterprise is able to outsource via the cloud (including analytics, integration, computation, CRM, etc.) is revamping typical architectural concerns, which increasingly involve the cloud. These facts are substantiated by IDC’s predictions that, “By 2018, at least 50% of IT spending will be cloud based. By 2018, 65% of all enterprise IT assets will be housed offsite and 33% of IT staff will be employed by third-party, managed service providers.”

The impact of this trend goes beyond merely extending the cloud’s benefits of decreased infrastructure, lower costs, and greater agility. It means that a number of pivotal facets of data management will require less daily manipulation on the part of the enterprise, and that end users can implement the results of those data-driven processes more quickly and for more specific use cases. Additionally, this trend heralds a fragmentation of the CDO role. The inherent decentralization involved in outsourcing IT functions through the cloud will be reflected in an evolution of this position. The foregoing Forrester post notes that “We will likely see fewer CDOs going forward but more chief analytics officers, or chief data scientists. The role will evolve, not disappear.”

Self-Service Data Science
Data science is another realm in which the other two 2016 trends in big data coalesce. The predominance of machine learning helps to improve the analytical insight gleaned from data science, just as a number of key attributes of this discipline are being outsourced and accessed through the cloud. Those include numerous facets of the analytics process, including data discovery, source aggregation, multiple types of analytics and, in some instances, even analysis of the results themselves. As Forrester indicated, “Data science and real-time analytics will collapse the insights time-to-market. The trending of data science and real-time data capture and analytics will continue to close the gaps between data, insight and action.” For 2016, Forrester predicts: “A third of firms will pursue data science through outsourcing and technology. Firms will turn to insights services, algorithms markets, and self-service advanced analytics tools, and cognitive computing capabilities, to help fill data science gaps.”

Self-service data science options for analytics encompass myriad forms, from providers that provision graph analytics, Machine Learning-as-a-Service, and various forms of cognitive computing. The burgeoning algorithms market is a vital aspect of this automation of data science, and enables companies to leverage previously existent algorithms with their own data. Some algorithms are stratified according to use cases for data according to business unit or vertical industry. Similarly, Machine Learning-as-a-Service options provide excellent starting points for organizations to simply add their data and reap predictive analytics capabilities.

Targeting Use Cases to Shrink Big Data
The principal point of commonality between all of these trends is the furthering of the self-service movement and the ability it gives end users to hone in on the uses of data, as opposed to merely focusing on the data itself and its management. The ramifications are that organizations and individual users will be able to tailor and target their big data deployments for individualized use cases, creating more value at the departmental and intradepartmental levels…and for the enterprise as a whole. The facilitation of small applications and uses of big data will justify this technology’s dominance of the data landscape.

Source: 2016 Trends in Big Data: Insights and Action Turn Big Data Small

The Big Data Game-Changer: Public Data and Semantic Graph Databases

By Dr. Jans Aasman, Ph.D, CEO of Franz Inc.

Big Data’s influence across the data landscape is well known, and virtually undeniable. Organizations are adopting a greater diversity of sources and data structures, in rapidly increasing quantities, while demanding analytic results faster and faster.

Of great importance is also how big data’s influence is shaping that landscape. Gartner asserted, “The number and variety of public-facing open datasets and Web APIs published by all tiers of governments worldwide continues to increase.” The inclusion of the growing variety of public data sources shows that big data is actually also big public data.

The key is to expeditiously integrate that data—in a well-governed, sustainable manner—with proprietary enterprise data for timely analytic action. Semantic graph database technology is built to facilitate data integration and as such surpasses virtually every other method for leveraging public data. The recent explosion of public sources of big data is effectively dictating the need for semantic graph databases.

The Smart Data Approach
More than any other type of analytics, public big data analysis and integration comprehensively utilizes the self-describing, smart data technologies on which semantic graph databases hinge. The exorbitant volumes and velocities of big data benefit from this intrinsic understanding of specific data elements that are expressed in semantic statements known as triples. But it’s the growing variety of data types included in integrating public and private big data sources that exploit this self-identifying penchant of semantic data—especially when linking disparate data sets.

This facet of smart data proves invaluable when modeling and integrating structured and unstructured (public) data during the analytic preparation process. The same methods by which proprietary data are modeled can be used to incorporate public data sources in a uniform way. When integrating unstructured or semi-structured public data with structured data for fraud detection, hedge fund analysis or other use cases, semantic graph databases’ propensity to readily glean the meaning of data and relationship between data elements is critical to immediate responses.

Triple Intelligence
Triple stores are integral to incorporating public big data with internal company sources because they provide a form of machine intelligence that is essential to expanding the understanding of how data elements relate to one another. Every semantic statement provides meaning about data. Triple stores utilize these statements as the basis for drawing further inferences about the way that data interrelates.

For example, say the enterprise data warehouse of a hospital has data about a patient expressed in triples like: patient X takes Drug Aspirin and patient X takes Drug Insulase. A publicly available medical drug database will have triples such as: Chlorpropamide has the brand name Insulase and Chlorpropamide has a Drug Interaction with Aspirin. The reasoning in the triple store will instantly conclude that Patient X has a problem.
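To make the example concrete, here is a minimal sketch of those triples and the interaction check using Python’s rdflib library. This is my own illustration, not Franz’s implementation, and all the URIs and predicate names are hypothetical.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Hospital data warehouse triples
g.add((EX.patientX, EX.takesDrug, EX.Aspirin))
g.add((EX.patientX, EX.takesDrug, EX.Insulase))

# Public drug database triples
g.add((EX.Chlorpropamide, EX.hasBrandName, EX.Insulase))
g.add((EX.Chlorpropamide, EX.hasDrugInteractionWith, EX.Aspirin))

# Find patients taking a brand-name drug whose generic form
# interacts with another drug they also take
query = """
PREFIX ex: <http://example.org/>
SELECT ?patient ?generic ?other WHERE {
    ?patient ex:takesDrug ?brand .
    ?generic ex:hasBrandName ?brand .
    ?generic ex:hasDrugInteractionWith ?other .
    ?patient ex:takesDrug ?other .
}
"""
for row in g.query(query):
    print(row.patient, "has a potential interaction:", row.generic, "with", row.other)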

Such an example illustrates the usefulness of triple stores when contextualizing public big data integrated with internal data. Firstly, this type of basic inferencing is not possible with other technologies, including both relational and graph databases that do not involve semantics. The latter are focused on the graph’s nodes and their properties; semantic graph databases focus on the relationships between nodes (the edges). Furthermore, such intelligent inferencing illustrates the fact that these stores can actually learn. Finally, such inferencing is invaluable when leveraged at scale and accounting for the numerous subtleties existent between big data, and is another way of deriving meaning from data in low latency production environments.

Public Big Data
Much of the value that public big data delivers pertains to general knowledge generated by researchers, scientists and data analysts from the government. By integrating this knowledge with big data within the enterprise we can build new applications that benefit the enterprise and society.

Dr. Jans Aasman, Ph.D., is the CEO of Franz Inc., an early innovator in Artificial Intelligence and a leading supplier of Semantic Graph Database technology.

Source: The Big Data Game-Changer: Public Data and Semantic Graph Databases by jaasman

February 27, 2017 Health and Biotech analytics news roundup

The latest news and commentary on healthcare analytics:

IBM Debuts Watson Imaging Clinical Review, the First Cognitive Imaging Offering: The tool will first help identify a common cardiovascular disease (aortic stenosis) by combining imaging data with other data sources. IBM hopes to expand the tool to identify other cardiovascular conditions.

New gene sequencing tool could aid in early detection, treatment of cancer: Researchers at Johns Hopkins, the University of Toronto, and the Ontario Institute for Cancer Research have developed a method for directly detecting some DNA sequence modifications.

Wearing your brain on your sleeve: Rhoda Au, a scientist at Boston University, is collecting day-by-day data about how Alzheimer’s and other dementias progress.

Cancer vs. the machine: how to personalise treatment using computing power: Microsoft’s Jasmin Fisher is using simulations of the cell to help advance drug discovery.

Source

Oct 12, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Tour of Accounting  Source

[ LOCAL EVENTS & SESSIONS]


[ AnalyticsWeek BYTES]

>> CMOs’ Journey from Big Data to Big Profits (Infographic) by v1shal

>> Underpinning Enterprise Data Governance with Machine Intelligence by jelaniharper

>> The Practice of Customer Experience Management: Paper for a Tweet by bobehayes


[ FEATURED COURSE]

Introduction to Apache Spark


Learn the fundamentals and architecture of Apache Spark, the leading cluster-computing framework among professionals…. more

[ FEATURED READ]

The Future of the Professions: How Technology Will Transform the Work of Human Experts


This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want … more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from a zombie apocalypse of unscalable models
One living and breathing zombie in today’s analytical models is the glaring absence of error bars. Not every model is scalable or holds its ground as data grows. Error bars should be attached to almost every model and duly calibrated. As business models rake in more data, error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failures, leading us to a Halloween we never want to see.
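As one concrete way to banish that zombie (my own illustration, not from the newsletter), you can attach a bootstrap confidence interval to any model metric; the metric values below are made up.

import numpy as np

def bootstrap_error_bar(values, n_boot=10000, ci=95, seed=0):
    # Resample the metric with replacement and report a percentile confidence interval
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    boot_means = np.array([
        rng.choice(values, size=len(values), replace=True).mean()
        for _ in range(n_boot)
    ])
    half = (100 - ci) / 2
    lower, upper = np.percentile(boot_means, [half, 100 - half])
    return values.mean(), lower, upper

# Made-up per-fold accuracies for some model
accuracies = [0.81, 0.84, 0.78, 0.86, 0.80, 0.83]
mean, low, high = bootstrap_error_bar(accuracies)
print("accuracy = %.3f (95%% CI: %.3f to %.3f)" % (mean, low, high))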

[ DATA SCIENCE Q&A]

Q: What does NLP stand for?
A: Natural Language Processing
* Concerns the interaction between computers and human (natural) languages
* Involves natural language understanding

Major tasks:
– Machine translation
– Question answering: “what’s the capital of Canada?”
– Sentiment analysis: extract subjective information from a set of documents, identify trends or public opinion in social media
– Information retrieval

Source

[ VIDEO OF THE WEEK]

#DataScience Approach to Reducing #Employee #Attrition


[ QUOTE OF THE WEEK]

I keep saying that the sexy job in the next 10 years will be statisticians. And I’m not kidding. – Hal Varian

[ PODCAST OF THE WEEK]

Understanding Data Analytics in Information Security with @JayJarome, @BitSight


[ FACT OF THE WEEK]

As recently as 2009 there were only a handful of big data projects and total industry revenues were under $100 million. By the end of 2012 more than 90 percent of the Fortune 500 will likely have at least some big data initiatives under way.

Sourced from: Analytics.CLUB #WEB Newsletter

Source by admin

What Role Do Startups Play in Fixing Consumer Debt? Essentials You Need to Know!

There was once a time when the economy was more straightforward. Financial shortfalls still happened, but debts were manageable, with pocket-friendly interest rates and considerable repayment time. People could live life on their own terms without constantly worrying about debt payments. Since then, much time has passed and economic crises have become more frequent. Loans are unavoidable, and paying them off without emptying one’s savings is almost impossible. In fact, consumer debt remains one of the most significant challenges the world faces today.

Household expenses have increased alarmingly over the last few years, driving up consumer debt:

With America’s total credit card debt continuing to rise at an alarming rate in 2017, financial analysts say that the average household trying to pay off a loan carries a balance of $15,000. In fact, there was an almost 8% increase in credit card debt in 2017 compared to the previous year. A recent study found that, on average, households with debt owe around $130,000 to their loan providers.

It is indeed a matter of grave concern when average households have to pay a considerable amount, roughly 8% of the household’s entire income, as debt interest. One of the main reasons consumer debt has gone through the roof in recent years is that the cost of living has skyrocketed at an astounding rate. While most households try to increase their income every year, expenses far outweigh earnings in most cases. Moreover, there has been a 50% increase in medical costs, and food and beverage costs have also increased by more than 35% in recent times. All of this has happened in the same time frame, burdening people with more expenses for necessary provisions.

The situation is so terrible that at this stage, medical bills are the most prominent cause of bankruptcy in the United States of America. It is quite sad to know that some Americans are under so much financial pressure that they might not be equipped enough to tackle an emergency of a few hundred dollars.

In this bleak situation, some startups are providing people with the ray of hope. Skeptics may say that tech companies are looking for opportunities to profit from this unfortunate financial condition. However, that is not the case! These companies are genuinely trying to help individuals pay off their loans in a systematic manner and are building solutions to address them.

What are the types of loans that these startups are focusing on?

Mortgages – Mortgages top the consumer debt category, and some startups are leveraging consumer data to assess their creditworthiness as well as improve underwriting accuracy. They also offer better rates, lower borrowing costs and even offer extra credit options to qualified buyers.

Medical bills – As mentioned earlier, medical bills are the most significant reason for bankruptcy in the USA. Startups are now connecting error detection algorithms with a network of medical billing specialists to eliminate medical billing errors and overcharges, helping families to save significantly. These solutions can even scan past medical records and detect errors or inaccuracies, and also arrange for reimbursements.

Credit card debt– New companies are giving personal loans at lower interest rates to individuals based on their past and future earning potential. This potential does not depend on factors that impact credit score such as education, hobbies and so on.

Student loans– Some startups are helping students refinance as well as consolidate their loans. They also assist students in chalking out a monthly payment plan that would be in line with their career trajectory.

What is the next batch of startups going to do to address the situation?

While some startups have done a fantastic job in laying the foundation of the solutions that will potentially fix the consumer debt issue, experts think that this is just the beginning. More and more businesses will enter the space, and the next wave of startups will bring in even more user-centric convenience. They will primarily focus on helping individuals be on top of their debt obligation management process. These new solutions will also help one to scale and reshape their lifestyle to avoid getting into an unfavorable financial situation. From doing away with unnecessary subscriptions on credit cards to scaling down the expenditure on a daily or monthly basis – the possibilities are endless for these solutions. How will they do it? Well, with the help of Artificial Intelligence (AI), of course. AI-based solutions can scan invoices and receipts to stay updated about day to day expenses, provide the users with real-time information about their financial condition, optimize payments to ensure that no debt is ever missed and so much more.

Future income streams will play a critical role in managing debts:

One of the main reasons so many people have to put up with debt-related issues is that the interest on the loan builds up faster than you can repay the sum. Even if it may sound idealistic, one must consider that they can use their future income to reduce their current debt. This would solve quite a bit of the problem. Again, there would be cynics who would say that there have been suggestions of this before, so what guarantee is there that this would work this time? In reply, it can be said that this time, the sheer wealth of data can enable AI to accomplish what was previously unachievable. From a predictive analysis of finances to prescriptive suggestions of expenses, savings, and payments, AI can render debt management much more comfortable.

In conclusion, it can be safely stated that technology has empowered individuals with detailed knowledge and better control of their finances, and it has also made it easier to handle debts. The financial management industry has seen newer efficiencies with new products. As startups make a collective effort to tackle the issues related to consumer debt, much support needs to be extended to the new entrepreneurs who are just starting out in this sector. Upon success, they can alleviate some of the most significant stress-inducing factors in today’s economy.

Source: What Role Do Startups Play in Fixing Consumer Debt? Essentials You Need to Know!

Upon earning a Business Doctorate in Mid-Career: What it means and what can you do with it? #DataScience, #Doctoral Education Trends

Upon earning a Business Doctorate in Mid-Career: What it means and what can you do with it?

My Doctoral Experience: Part I (of a two-part article).

The purpose of this article is to explain to business professionals what obtaining a doctoral degree entails; I will briefly describe the distinct types of doctorates and recount my experience in a practitioner-based program. Later in the article, I discuss the benefits of achieving a doctorate and some of the potential career paths.

Introduction and Gratitude:

I am pleased to announce to my network that I recently completed my Doctorate in Marketing with a research concentration in Marketing Analytics. I had the great fortune of being able to earn a Doctorate in Business while working, though that made the trials and tribulations of such a venture extremely challenging. The quintessential achievement of a doctorate is that you have mastered a subject entirely. With that said, I am forever grateful to Pace University for creating a program that is extremely rigorous, AACSB accredited, and well rounded. I also had the great honor of having two very famous authors in analytics and loyalty marketing on my dissertation committee: Dr. Tom Davenport and Dr. Terry Vavra. I am incredibly grateful for their participation. I highly recommend all of the following practitioner books for those interested in analytics, CRM, digital marketing, and customer satisfaction and engagement:

Top Books by Thomas Davenport

Competing on Analytics (Revised)

Analytics at Work

Only Humans Need Apply

Enterprise Analytics

Big Data at Work

Keeping Up with The Quants

Top Books by Terry Vavra

The Customer Delight Principle

Aftermarketing

Loyalty Myths

Improving Your Measurement of Customer Satisfaction

Customer Satisfaction Measurement Simplified.

Not being that familiar with doctoral programs early on, I learned that earning a doctorate in NYC adds to the competitiveness of the programs and the caliber of the students and faculty you meet along the way. I benefited because many of the professors at Pace hold degrees from top schools, and there is real competition for students among Manhattan-based schools such as Pace, CUNY, Fordham, and others. A doctoral program is about 20 courses, or 60 credits. The cost of doctoral programs can run from $75k to $150k depending on the program, and the time to obtain a professional doctorate can range from 3 to 7 years; I finished in approximately five. It is important to point out that while a majority of doctoral students complete their coursework, a smaller percentage finish the dissertation, mostly because of the motivation and self-discipline required to drive the writing of a manuscript largely on your own. For those who hold a doctoral degree, whether Ph.D., D.B.A., or D.P.S., the rigor, research discipline, and significant statistical knowledge acquired are a benefit and a real differentiator over M.B.A. or M.S. holders, who have become highly commoditized as of late. While many practitioner roles in marketing and analytics do not require more than an MBA, I would encourage prospective employers to look at the level of statistical knowledge and evidence-based process that doctoral candidates acquire and how it may be beneficial for specific roles such as Chief Data Scientist or other quantitative and technical roles.

Alphabet Soup: Ph.D. (Research-Based Doctorate/Academic) versus D.B.A./D.P.S. (Practitioner or Executive Based on a Research Discipline).

Very often I am asked whether one type of doctorate, academic or practitioner, is better than the other, and my view is that it depends on what you plan to do with it once it is achieved. When evaluating which path to choose, it is vital to understand, or at least try to forecast, how you might use the doctorate when you finish. If your lifelong goal is to pursue a tenure-track academic career, then a research-based doctorate, like the Pace program, has some advantages over a purely practitioner doctorate. The fact is, as of 2018 there is no consensus on which is better, because it depends on many things, including what you are going to do with it. Most Ph.D. programs seek to prepare candidates for a career in academia, educating students and conducting research.

Career Paths after completing the Doctorate:

Some schools hire traditional tenure-track academicians, clinical professors (non-tenure track but usually former practitioners) and adjunct faculty (part-time faculty with full-time professional careers) solely based on holding a doctorate. Which path a student wants to take will often dictate the type of doctorate one will pursue. One thing to point out is that there is a wide variance in both Ph.D. and DBA/DPS degrees and much overlap, so some Ph.D. programs look more like DBA/DPS programs, and some DBA/DPS programs look more like Ph.D. programs.

Some professional doctoral programs, such as the DBA/DPS, take a broader scope and breadth in their curriculum, for example having students take courses in all aspects of the business, from management to finance (somewhat like an MBA and a Ph.D. program combined). Said differently, some professional doctorates take a multi-disciplinary approach that gives students a flavor of theory and research in a variety of functional areas, while others allow them to specialize in one area. With this said, there has been much debate in practitioner and academic circles about which program type is better: a purely research-based Ph.D. or an executive doctorate such as a DBA/DPS. To clarify, it is essential to point out that some doctorates focus on research to create new knowledge, while others focus on research to solve organizational problems by applying theory to practice. The former are more likely to emphasize quantitative research methodologies, while the latter are more likely to emphasize qualitative research methodologies. Again, not all Ph.D.s are one and all professional doctorates the other. One developing trend is a merging of professional doctorates such as the DBA/DPS and Ph.D.s, in that business doctorates, whether professional or academic, now both require original research and a full dissertation based on a research study written in manuscript form.

AACSB Accreditation is more critical than the School Brand

What is most important in choosing a doctoral program, believe it or not, is not necessarily the school’s brand name but whether the school is accredited by the premier accrediting body, which in the case of business is the AACSB; Pace, CUNY, NYU, Fordham, and Columbia all have it. Why does premier accreditation like the AACSB matter, one might ask? It ensures a substantial level of rigor in the program and makes programs somewhat more comparable across the board by creating standards for doctoral education. I also learned that accreditation can create a significant barrier to entry for those desiring to teach at an accredited university. For example, AACSB requires a certain percentage of faculty to have earned doctoral degrees and be productive scholars. This discourages schools from hiring full-time faculty who do not meet this requirement. Most Ph.D. programs and some professional doctorates require students to demonstrate the ability to conduct research and the skill to use a variety of multivariate statistical methods.

Components of any Doctoral Program

The goal of students pursuing a doctorate is to become a bona fide expert in a field. Doctoral programs require them to think about the world quite differently, in the framework of theory and research. Most doctoral programs have the following elements at their core.

– Reading and understanding the theory to become expert in one or more academic disciplines.

– Research methods courses that focus on research design, measurement, and evaluation.

– Advanced statistics courses that develop skill in applying techniques like analysis of variance, regression and correlation analysis, factor analysis, cluster analysis, and structural equation modeling.

– Conducting research and writing many papers in a chosen field (such as marketing). Some require developing an article for acceptance at academic conferences or publication in scholarly journals.

– Creating and defending a dissertation, which is similar to writing a treatise. Depending on the program, it can either develop and test theory to create new knowledge or apply theory and analytics to investigate and solve a practitioner problem. It demonstrates the expertise a student develops throughout the doctoral program and provides a platform for future interest and study.

Doctoral Program Requirements: My experience from Pace’s Executive Doctorate.

Ok, now the next question, what does one study in a Business Doctoral program, regardless of the assorted flavors and program names. Here are some of the elements and I would argue are quite common in academic doctorates and practitioner doctorates (PhDs, DBAs/DPS).

#1) All programs have a massive research component. The program at Pace had me reading, analyzing, and critiquing over 150 journal articles in my chosen major/research area, which was Marketing and Analytics. These critiques were due every week, sometimes twice a week, so it was not unusual to be up after midnight to submit them online. In addition to reading over 150 journal articles (the kind of reading that could cure insomnia: very esoteric topics and very high-level concepts, plus very detailed methodologies and testing results), the doctoral student must write 2-3 research papers in every class taken during coursework, each of which requires reading 15-30 articles to understand its theoretical basis (not light reading; the kind you have to re-read 3-5 times to make sure you understood what the heck they were saying in some cases).

#2) Since a dissertation is in effect like writing a book, it helps you build and demonstrate competency in your chosen field. By building competence or expertise in a topical area and then defending the dissertation, you prove that expertise. Mine was Marketing Analytics.

#3) Development of critical thinking through reading, summarizing, and discussing the scholarly literature. Reviewing the literature enabled us to better understand research methodology, and we were required to comment on the testing approach as well as have a point of view on the analysis.

#4) All good programs prepare you for publishing research results in peer-reviewed, scholarly journals. As most good programs emphasize research, Pace’s program required a one-year publishing tutorial in which the student must submit a paper to a national conference and ultimately to a scholarly journal. This included co-authoring an article with a published author and learning SPSS, AMOS, structural equation modeling, factor analysis, and other supporting statistical methods in excruciating detail, including operating these tools oneself. I had the good fortune of having my paper accepted at a national conference right out of the gate, which demonstrates the rigor and competitiveness of a doctoral program.

#5) Comprehensive Exam (includes a written and an oral exam): a written 6-hour exam on over 150 articles in your discipline (mine was marketing) and an oral exam, a 3-hour cross-examination by two tenured faculty members. The purpose of the examination is to certify your competence and currency in an academic discipline.

#6) You are invited to form your committee with 5 Ph.D.’s (3 internal to your school, 2 External Experts)

#7) Dissertation Proposal Approval: 3 hours and 3 Chapters of your manuscript. Intro, Lit Review, Methodology Proposal. Approval to collect data. The format is a 3-hour in-person presentation with the five members of your committee. Your committee provides you with valuable feedback and suggests alternative ways of approaching your methodology.

#8) Dissertation Final Defense Meeting: 3 hours with your committee covering the entire manuscript, including the analysis, results and discussion, directions for future research, and implications for theory and practice. It is a 150-200 page paper (a “book”). Depending on the methodology, it may involve dozens of analyses: factor analysis, ANOVA, mediation and moderation tests, all leading to proving a structural model and a measurement model. It takes a year or two to write.

#9) If you are still alive after all this, you get a piece of paper that says you are a Dr. It is a mammoth, life-altering task and the pinnacle of academic achievement.

The following are the courses that one takes over the five years to complete the doctorate.

Coursework by Year

Year 1

Fall

Elective

Doctoral Foundation Seminar in Management

Explorations in Business Research

Spring

Elective

Doctoral Foundation Seminar in Finance and Economics

Doctoral Foundation Seminar in Marketing

Year 2

Autumn

Regression Analysis

Publishing Tutorial 1

Spring

Elective

Selected Topics in Multivariate Analysis

Publishing Tutorial 2

Year 3

Autumn

Concentration Seminar

Research Design and Measurement

Doctoral Concentration Seminar in

Corporate Finance (FIN 821) or

Consumer Research (MAR 831) or

Organization Behavior (MGT 835)

Spring

Concentration Seminar

Doctoral Foundation Seminar in Cross Cultural Management

Doctoral Concentration Seminar in

Capital Markets (FIN 822) or

Marketing Management (MAR 832) or

Strategic Management (MGT 836)

Year 4

Autumn

Dissertation Seminar 1

Candidate passes doctoral comprehensive examination

Doctoral Program appoints a dissertation committee

Spring

Dissertation Seminar 2

Candidate presents dissertation proposal to committee for approval

Year 5

Autumn

Dissertation Seminar 3

Candidate collects and analyzes empirical data

Spring

Dissertation Seminar 3

Candidate completes and defends a dissertation

Explaining the Value of a Doctorate:

Doctoral Degree benefits for careers in Academia or as an academic practitioner.

The doctorate brings more credibility to academic research. That credibility rests on the fact that research doctorates inculcate the knowledge and skill necessary to conduct research worth publishing. While not the primary intent, a doctorate may also lend credibility with readers of professional books, who can see that the book is well researched and written by a doctor. The doctorate distinguishes you from your colleagues, as it is a rare degree in the U.S. The next benefit is acquiring research skills: becoming a competent observer of the world, doing analysis, and applying it to practical problems. It remains to be seen whether access to jobs increases, but in data science and certain fields, doctoral degree holders in marketing are often preferred given their statistical training.

How does a Doctorate Help a Consulting Practice?

While the doctorate provides expanded opportunities in teaching, it also allows the autonomy to do consulting. Having a doctorate lends credibility to a consulting practice, or in my case to a Chief Analytics or CMO role in a corporation. While not required, clients very often want to send their Ph.D.s to talk with you, the doctoral holder, whenever you are working on methodology-driven projects, so that they can review the work and assure senior management that the consultant’s practice is sound. Clients and firms hold the consultant in higher regard if they know he or she has a doctorate, as they often understand the rigor of going through such a program and know they can expect detailed research and analysis and, often, a more value-added focus.

My Dissertation, Articles, Plans for Future Research

In conclusion, I will briefly discuss my dissertation topic, the expertise I developed, and my research plans. My dissertation topic was Marketing Analytics. Marketing Analytics is a relatively new but increasingly prominent field in which data tools are applied to quantify and monitor marketing performance and customer information in order to optimize investments in marketing programs and maximize customer interaction. My dissertation is a B2B study in which I established a set of predictors that help determine the degree to which a firm’s marketing function is analytically driven. The research builds on extant theories of market orientation by establishing the presence of a new construct known as Marketing Analytics Orientation (MAO) through qualitative and quantitative research methods, including factor analysis and structural equation modeling. Firms in the study are scored on the MAO Index, and the characteristics of the more analytical firms are discussed. Furthermore, the study explores the relationship between the factors that comprise Marketing Analytics Orientation (MAO) and Marketing Performance (MP). I am in the process of publishing various parts of this study in scholarly journals. The study opens opportunities for future research into the drivers of how analytical marketing organizations are, and into the impact analytics has on marketing performance and on other campaign and market-level performance indicators such as company stock price.

Please check back here if you are interested in reading my Dissertation on the adoption of Marketing Analytics or other related articles. This will take some time to appear as I am working on publishing now.

I hope this addresses the universal questions of why one would pursue a doctorate and its potential benefits across several career paths.

In future articles and Blogs, I will explore ways in which Industry and Academia can further align for success. (Part II of this series on Education)

Sincerely,

Dr. Tony Branda

Source by tony