A Single Customer View: The Secret Weapon Everyone Must Use

Insurers have a lot of customer data peppered across systems throughout their enterprise. That data typically lives in multiple applications like CRM, Billing, Policy Administration, and so on. This approach, however, suffers from multiple challenges:

 

  1. Duplicate data across multiple systems
  2. Multiple versions of the same data point
  3. No single source of truth
  4. No correlation between cause and action
  5. Completely underutilized customer interaction data

 

Imagine a structure so flexible and scalable that it could bring all your data sources together, irrespective of data formats, tie in with the customer key result areas (KRAs), and at the same time deliver predictive insights at the point of decision. In real time.

 

Enter single customer view. Or as we call it – Customer OneView.

 

CRUX OneView

OneView in Action

 

Customer OneView is part of the Aureus data analytics platform called CRUX. OneView is nothing like a CRM. While a CRM shows only static information, OneView delivers intelligent, usable, real-time insights that can be put to use immediately.

OneView can integrate with (at least) four broad event data streams:

  1. Customer
  2. Relationship
  3. Transactions
  4. Interaction

 

Nitin has written about stream-based data integration in his insightful post titled “Cheers to Stream Based Integration”.

 

These data streams could originate across multiple source systems – Policy Admin, CRM, Billing, etc. Between them, these four streams cover some of the most critical customer data, which often lies underutilized. OneView not only brings these data streams together, it also helps build a comprehensive customer life journey showing important milestones, critical customer interactions, and sentiment at the level of each interaction or transaction as well as at the relationship level. While OneView is a powerful insights delivery framework, it also delivers the output of predictive analytics models in a form that business users can act on, translating model output into usable insights. Imagine a customer sales representative talking to a customer, or a field sales agent going to meet one. OneView gives them unambiguous insight into the customer's history, sentiment, and even the potential action to take, without burdening them with the hows and whys.
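
To make the idea concrete, here is a minimal, hypothetical sketch of merging event streams from several source systems into one chronological customer view. The stream names and fields are invented for illustration; they are not the actual CRUX/OneView schema.

```python
# A hypothetical sketch of merging customer, transaction, and interaction events
# from separate source systems into one ordered customer timeline. Field names
# are illustrative only.
from datetime import datetime
from operator import itemgetter

crm_events = [
    {"ts": datetime(2023, 1, 5), "stream": "customer", "event": "address_change"},
]
policy_admin_events = [
    {"ts": datetime(2023, 2, 1), "stream": "transaction", "event": "premium_paid"},
]
call_center_events = [
    {"ts": datetime(2023, 2, 3), "stream": "interaction",
     "event": "service_call", "sentiment": -0.4},
]

def one_view(customer_id, *streams):
    """Merge events from every source system into a single ordered timeline."""
    timeline = [dict(event, customer_id=customer_id) for s in streams for event in s]
    return sorted(timeline, key=itemgetter("ts"))

for event in one_view("CUST-001", crm_events, policy_admin_events, call_center_events):
    print(event["ts"].date(), event["stream"], event["event"])
```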

 

Imagine a typical customer cross-sell scenario. Most organizations tend to throw (figuratively speaking) the entire product catalog at the customer without any consideration for their life-stage needs, portfolio, demographics, and so on. Not only is this a highly ineffective cross-sell approach, it also makes for a terrible customer experience. With OneView, the customer service representative or the field sales agent knows exactly what the customer's latest and overall sentiment is, what her product portfolio looks like, and which product she is most likely to buy.
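
As a rough illustration of that last point, the sketch below ranks products the customer does not yet own by a purchase propensity score. The products and scores are invented; in a real deployment they would come from a predictive model, not be hard-coded.

```python
# A toy next-best-product ranking: exclude what the customer already owns and
# sort the rest by a (hypothetical) model-predicted purchase propensity.
owned_products = {"term_life"}

# Hard-coded purely for illustration; a real system would score these with a model.
propensity_scores = {
    "term_life": 0.10,
    "health_rider": 0.62,
    "retirement_annuity": 0.48,
    "home_insurance": 0.21,
}

recommendations = sorted(
    (p for p in propensity_scores if p not in owned_products),
    key=propensity_scores.get,
    reverse=True,
)
print(recommendations)  # ['health_rider', 'retirement_annuity', 'home_insurance']
```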

 

The end goal of any activity is to make the end customer's experience epic. By knowing how a customer is likely to behave, modeled on her previous behavior, insurance companies can ensure that the customer experience keeps moving in the right direction.

 

OneView

Source: A Single Customer View : The Secret Weapon Everyone Must Use by analyticsweek

The Mainstream Adoption of Blockchain: Internal and External Enterprise Applications

The surging interest in blockchain initially pertained to its utility as the underlying architecture for the cryptocurrency phenomenon. Nonetheless, its core attributes (its distributed ledger system, immutability, and requisite permissions) are rapidly gaining credence in an assortment of verticals for numerous deployments.

Blockchain techniques are routinely used in several facets of supply chain management, insurance, and finance. In order to realize the widespread adoption rates many believe this technology is capable of, however, blockchain must enhance the very means of managing data-driven processes, similar to how applications of Artificial Intelligence are attempting to do so.

Today, there are myriad options for the enterprise to improve operations by embedding blockchain into fundamental aspects of data management. If properly architected, this technology can substantially impact facets of Master Data Management, data governance, and security. Additionally, it can provide these advantages not only between organizations, but also within them, operating as what Franz CEO Jans Aasman termed “a usability layer on top” of any number of IT systems.

Customer Domain Management
A particularly persuasive use case for the horizontal adoption of blockchain is deploying it to improve customer relations. Because blockchain essentially functions as a distributed database in which transactions between parties must be validated for approval (via a consensus approach bereft of centralized authority), it’s ideal for preserving the integrity of interactions between the enterprise and valued customers. In this respect it can “create trusted ledgers for customers that are completely invisible to the end user,” Aasman stated. An estimable example of this use case involves P2P networks, in which “people just use peer-to-peer databases that record transactions,” Aasman mentioned. “But these peer-to-peer transactions are checked by the blockchain to make sure people aren’t cheating.” Blockchain is used to manage transactions between parties in supply chains in much the same way. Blockchain aids organizations with this P2P customer use case because without it, “it’s very, very complicated for normal people to get it done,” Aasman said about traditional approaches to inter-organization ledger systems. With each party operating on a single blockchain, however, transactions become indisputable once they are sanctioned between the participants.
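
For readers who want a feel for the mechanics, here is a minimal, self-contained sketch of a hash-chained ledger in Python. It only illustrates the tamper-evidence idea described above; a real blockchain adds a peer-to-peer network and a consensus protocol on top of this.

```python
# A minimal hash-chained ledger. Not a real distributed blockchain: there is no
# network or consensus here, just the immutability property of chained hashes.
import hashlib
import json

def block_hash(transaction, prev_hash):
    payload = json.dumps({"transaction": transaction, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_transaction(chain, transaction):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    chain.append({"transaction": transaction, "prev_hash": prev_hash,
                  "hash": block_hash(transaction, prev_hash)})

def is_valid(chain):
    """Any edited block breaks its own hash or the link from the next block."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["transaction"], block["prev_hash"]):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_transaction(ledger, {"from": "org_a", "to": "org_b", "amount": 100})
append_transaction(ledger, {"from": "org_b", "to": "org_c", "amount": 40})
print(is_valid(ledger))                          # True
ledger[0]["transaction"]["amount"] = 1_000_000   # attempted tampering
print(is_valid(ledger))                          # False
```

Because every block embeds the hash of its predecessor, rewriting history means recomputing every subsequent hash, which is precisely what the consensus layer of a real deployment makes infeasible.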

Internal Governance and Security
Perhaps the most distinguishable feature of the foregoing use case is the fact that in most instances, end users won’t even know they’re working with blockchain. What Aasman called an “invisible” characteristic of the blockchain ledger system is ideal for internal use to monitor employees in accordance with data governance and security procedures. Although blockchain supports internal intelligence or compliance for security and governance purposes, it’s most applicable to external transactions between organizations. In finance—just like in supply chain or in certain insurance transactions—“you could have multiple institutions that do financial transactions between each other, and each of them will have a version of that database,” Aasman explained. Those using these databases won’t necessarily realize they’re fortified by blockchain, and will simply use them as they would any other transactional system. In this case, “an accountant, a bookkeeper or a person that pays the bills won’t even know there’s a blockchain,” commented Aasman. “He will just send money or receive money, but in the background there’s blockchain making sure that no one can fool with the transactions.”

Master Data Management
Despite the fact that individual end users may be ignorant of the deployment of blockchain in the above use cases, it’s necessary for underlying IT systems to be fully aware of which clusters are part of this ledger system. According to Aasman, users will remain unaware of blockchain’s involvement “unless, of course, someone was trying to steal money, or trying to delete intermediate transactions, or deny that he sent money, or sent the same money twice. Then the system will say hey, user X has engaged in a ‘confusing’ activity.” In doing so, the system will help preserve adherence to company policies related to security or data governance issues.

Since organizations will likely employ other IT systems without blockchain, Master Data Management hubs will be important for “deciding for which transactions this applies,” Aasman said. “It’s going to be a feature of MDM.” Mastering the data from blockchain transactions with centralized MDM approaches can help align this data with others vital to a particular business domain, such as customer interactions. Aasman revealed that “the people that make master data management have to specify for which table this actually is true. Not the end users: the architects, the database people, the DBAs.” Implementing the MDM schema needed to optimize such internal applications of blockchain, alongside those for additional databases and sources, can quickly become complex with traditional methods, and may be simplified via smart data approaches.

Overall Value
The rapidity of blockchain’s rise will ultimately be determined by the utility the enterprise can derive from its technologies, as opposed to simply limiting its value to financial services and cryptocurrency. There are just as many telling examples of applying blockchain’s immutability to various facets of government and healthcare, or leveraging smart contracts to simplify interactions between business parties. By using this technology to better customer relations, reinforce data governance and security, and assist specific domains of MDM, organizations get a plethora of benefits from incorporating blockchain into their daily operations. The business value reaped in each of these areas could contribute to the overall adoption of this technology in both professional and private spheres of life. Moreover, it could help normalize blockchain as a commonplace technology for the contemporary enterprise.

Source: The Mainstream Adoption of Blockchain: Internal and External Enterprise Applications

Cloud Migrations: Big Challenges, Big Opportunities

When your organization decides to pull the trigger on a cloud migration, a lot of stuff will start happening all at once. Regardless of how long the planning process has been, once data starts being relocated, a variety of competing factors that until now were theoretical become devastatingly real: frontline business users still want to be able to run analyses while the migration is happening, your data engineers are concerned with the switch from whatever database you were using before, and the development org has its own data needs. With a comprehensive, BI-focused data strategy, you and your stakeholders will know what your ideal data model should look like once all your data is moved over. This way, as you're managing the process and trying to keep everyone happy, you'll end up in a stronger place when your migration is over than where you started. And isn't that the goal?

BI-Focus and Your Data Infrastructure

“What does all this have to do with my data model?” you might be wondering. “And for that matter, my BI solution?”

I’m glad you asked, internet stranger. The answer is everything. Your data infrastructure underpins your data model and powers all of your business-critical IT systems. The form it takes can have immense ramifications for your organization, your product, and the new things you want to do with it (and how you want to build and expand on it and your feature offerings). Your data infrastructure is hooked into your BI solution via connectors, so it’ll work no matter where the data is stored. Picking the right data model, once all your data is in its new home, is the final piece that will allow you to get the most out of it with your BI solution. If you don’t have a BI solution, the perfect time to implement is once all your data is moved over and your model is built. This should all be part of your organization’s holistic cloud strategy, with buy-in from major partners who are handling the migration.

Cloud Migration

Picking the Right Database Model for You

So you’re giving your data a new home and maybe implementing a BI solution when it’s all done. Now, what database model is right for your company and your use case? There are a wide array of ways to organize data, depending on what you want to do with it.

One of the broadest is a conceptual model, which focuses on representing the objects that matter most to the business and the relationships between them (versus being a model of the data about those objects). This database model is designed principally for business users. Compare this to a physical model, which is all about the structure of the data. In this model, you'll be dealing with tables, columns, relationships, and foreign keys, which define the connections between the tables.

Now, let’s say you’re only focused on representing your data organization and architecture graphically, putting aside the physical usage or database management framework. In cases like these, a logical model could be the way to go. Examples of these types of databases include relational (dealing with data as tables or relations), network (putting data in the form of records), and hierarchical (which is a progressive tree-type structure, with each branch of the tree showing related records). These models all feature a high degree of standardization and cover all entities in the dataset and the relationships between them.

Got a wide array of different objects and types of data to deal with? Consider an object-oriented database model, sometimes called a “hybrid model.” These models treat their contained data as a collection of reusable software objects, all with related features. They also consolidate tables but aren't limited to them, giving you freedom when dealing with lots of varied data. You can use this kind of model for multimedia items you can't put in a relational database, or to create a hypertext database that connects to other objects and sorts out divergent information.

Lastly, we can't help but mention the star schema here, which has dimension tables arranged around a central fact table and looks like an asterisk. This model is great for querying informational indexes as part of a larger data pool. It's used to dig up insights for business users, OLAP cubes, analytics apps, and ad-hoc analyses. It's a simple yet powerful structure that sees a lot of use.
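
As a small, concrete illustration, here is a toy star schema built with Python's built-in SQLite module; the table and column names are invented for the example.

```python
# A toy star schema: a central fact table surrounded by dimension tables.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    product_id  INTEGER REFERENCES dim_product(product_id),
    amount      REAL
);
""")
cur.executemany("INSERT INTO dim_customer VALUES (?, ?)", [(1, "EMEA"), (2, "APAC")])
cur.executemany("INSERT INTO dim_product VALUES (?, ?)", [(10, "Widgets"), (11, "Gadgets")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(100, 1, 10, 250.0), (101, 2, 11, 90.0), (102, 1, 11, 40.0)])

# A typical ad-hoc rollup: total sales by region and category.
for row in cur.execute("""
    SELECT c.region, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_id = c.customer_id
    JOIN dim_product  p ON f.product_id  = p.product_id
    GROUP BY c.region, p.category
"""):
    print(row)
```

The shape is the point: the fact table stays narrow and additive, while descriptive attributes live in the dimensions, which keeps ad-hoc rollups simple for business users and OLAP tools alike.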

Now What?

Whether you’re building awesome analytics into your app or empowering in-house users to get more out of your data, knowing what you’re doing with your data is key to maintaining the right models. Once you’ve picked your database, it’s time to pick your data model, with an eye towards what you want to do with it once it’s hooked into your BI solution.

Worried about losing customers? (Who isn’t?) A predictive churn model can help you get ahead of the curve by putting time and attention into relationships that are at risk of going sour. On the other side of the coin, predictive up- and cross-sell models can show you where you can get more money out of a customer and which ones are ripe to deepen your financial relationship.
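
A minimal sketch of such a churn model, using scikit-learn on synthetic data (the features and labels below are fabricated purely for illustration), might look like this:

```python
# Train a simple churn classifier and rank customers by predicted churn risk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.integers(1, 60, n),      # months as a customer
    rng.integers(0, 10, n),      # support tickets last quarter
    rng.uniform(0, 500, n),      # monthly spend
])
# Synthetic label: more tickets and shorter tenure -> more likely to churn.
y = (0.3 * X[:, 1] - 0.05 * X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Rank customers by churn risk so retention effort goes where it matters most.
risk = model.predict_proba(X_test)[:, 1]
print("Highest-risk customers:", np.argsort(risk)[::-1][:5])
```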

What about your marketing efforts? A customer segmentation data model can help you understand the buying behaviors of your current customers and target groups, and which marketing plays are having the desired effect. Or go beyond marketing with “next-best-action” models that take into account life events, purchasing behaviors, social media, and anything else you can get your hands on, so you can figure out which next action for a given target (email, ads, phone call, etc.) will have the greatest impact. And predictive analyses aren't just for human-centric activities: manufacturing and logistics companies can take advantage of maintenance models that let you head off machine breakdowns based on historical data. Don't get caught without a vital piece of equipment again.
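
For the segmentation case, a bare-bones sketch with k-means on made-up behavioral features (again, purely illustrative) could look like this:

```python
# Cluster customers into segments from behavioral features, then inspect each
# segment's average behavior to decide which marketing play fits it.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = np.column_stack([
    rng.poisson(6, 500),          # orders per year
    rng.gamma(2.0, 40.0, 500),    # average order value
    rng.integers(1, 365, 500),    # days since last order
])

scaled = StandardScaler().fit_transform(features)
segments = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(scaled)

for seg in range(4):
    print(seg, features[segments == seg].mean(axis=0).round(1))
```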

Bringing It All Together with BI

Staying focused on your long-term goals is an important key to success. Whether you're building a game-changing product or rebuilding your data model, having a firmly defined goal makes all the difference when it comes to the success of your enterprise. If you're already migrating your data to the cloud, then you're at the perfect juncture to pick the right database and data models for your eventual use cases. Once these are set up, they'll integrate seamlessly with your BI tool (and if you don't have one yet, it'll be the perfect time to implement one). Big moves like this represent big challenges, but also big opportunities to lay the foundation for whatever you're planning on building. Then you just have to build it!

Cloud Migration

Source: Cloud Migrations: Big Challenges, Big Opportunities by analyticsweek

IBM Invests to Help Open-Source Big Data Software — and Itself

The IBM “endorsement effect” has often shaped the computer industry over the years. In 1981, when IBM entered the personal computer business, the company decisively pushed an upstart technology into the mainstream.

In 2000, the open-source operating system Linux was viewed askance in many corporations as an oddball creation and even legally risky to use, since the open-source ethos prefers sharing ideas rather than owning them. But IBM endorsed Linux and poured money and people into accelerating the adoption of the open-source operating system.

On Monday, IBM is to announce a broadly similar move in big data software. The company is placing a large investment — contributing software developers, technology and education programs — behind an open-source project for real-time data analysis, called Apache Spark.

The commitment, according to Robert Picciano, senior vice president for IBM’s data analytics business, will amount to “hundreds of millions of dollars” a year.

Photo courtesy of Pingdom via Flickr

In the big data software market, much of the attention and investment so far has been focused on Apache Hadoop and the companies distributing that open-source software, including Cloudera, Hortonworks and MapR. Hadoop, put simply, is the software that makes it possible to handle and analyze vast volumes of all kinds of data. The technology came out of the pure Internet companies like Google and Yahoo, and is increasingly being used by mainstream companies, which want to do similar big data analysis in their businesses.

But if Hadoop opens the door to probing vast volumes of data, Spark promises speed. Real-time processing is essential for many applications, from analyzing sensor data streaming from machines to sales transactions on online marketplaces. The Spark technology was developed at the Algorithms, Machines and People Lab at the University of California, Berkeley. A group from the Berkeley lab founded a company two years ago, Databricks, which offers Spark software as a cloud service.
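
For a sense of what working with Spark looks like, here is a tiny PySpark sketch, assuming a local Spark installation; the sensor readings are invented, and in production they would arrive as a stream rather than a hard-coded list.

```python
# Aggregate (made-up) machine sensor readings with Spark's DataFrame API.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-demo").getOrCreate()

readings = spark.createDataFrame(
    [("m1", 71.2), ("m1", 98.6), ("m2", 64.0), ("m2", 66.3)],
    ["machine_id", "temperature"],
)

# Aggregations run in memory across the cluster, which is where Spark's speed
# advantage over disk-bound batch jobs comes from.
readings.groupBy("machine_id").agg(F.avg("temperature").alias("avg_temp")).show()

spark.stop()
```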

Spark, Mr. Picciano said, is crucial technology that will make it possible to “really deliver on the promise of big data.” That promise, he said, is to quickly gain insights from data to save time and costs, and to spot opportunities in fields like sales and new product development.

IBM said it will put more than 3,500 of its developers and researchers to work on Spark-related projects. It will contribute machine-learning technology to the open-source project, and embed Spark in IBM’s data analysis and commerce software. IBM will also offer Spark as a service on its programming platform for cloud software development, Bluemix. The company will open a Spark technology center in San Francisco to pursue Spark-based innovations.

And IBM plans to partner with academic and private education organizations including UC Berkeley’s AMPLab, DataCamp, Galvanize and Big Data University to teach Spark to as many as 1 million data engineers and data scientists.

Ion Stoica, the chief executive of Databricks, who is a Berkeley computer scientist on leave from the university, called the IBM move “a great validation for Spark.” He had talked to IBM people in recent months and knew they planned to back Spark, but, he added, “the magnitude is impressive.”

With its Spark initiative, analysts said, IBM wants to lend a hand to an open-source project, woo developers and strengthen its position in the fast-evolving market for big data software.

By aligning itself with a popular open-source project, IBM, they said, hopes to attract more software engineers to use its big data software tools, too. “It’s first and foremost a play for the minds — and hearts — of developers,” said Dan Vesset, an analyst at IDC.

IBM is investing in its own future as much as it is contributing to Spark. IBM needs a technology ecosystem, where it is a player and has influence, even if it does not immediately profit from it. IBM mainly makes its living selling applications, often tailored to individual companies, which address challenges in their business like marketing, customer service, supply-chain management and developing new products and services.

“IBM makes its money higher up, building solutions for customers,” said Mike Gualtieri, an analyst for Forrester Research. “That’s ultimately why this makes sense for IBM.”

The original article appeared in The New York Times.

Source: IBM Invests to Help Open-Source Big Data Software — and Itself

Creating Value from Analytics: The Nine Levers of Business Success

IBM just released the results of a global study on how businesses can get the most value from Big Data and analytics. They found nine areas that are critical to creating value from analytics. You can download the entire study here.

The IBM Institute for Business Value surveyed 900 IT and business executives from 70 countries from June through August 2013. The 50+ survey questions were designed to help translate concepts related to generating value from analytics into actions.

Nine Levers to Value Creation

Figure 1. Nine Levers to Value Creation from Analytics

The researchers identified nine levers that help organizations create value from data. They compared leaders (those who identified their organization as substantially outperforming their industry peers) with the rest of the sample. They found that the leaders (19% of the sample) implement the nine levers to a greater degree than the non-leaders. These nine levers are:

  1. Source of value: Actions and decisions that generate results. Leaders tend to focus primarily on their ability to increase revenue and less so on cost reduction.
  2. Measurement: Evaluating the impact on business outcomes. Leaders ensure they know how their analytics impact business outcomes.
  3. Platform: Integrated capabilities delivered by hardware and software. Sixty percent of Leaders have predictive analytic capabilities, as well as simulation (55%) and optimization (67%) capabilities.
  4. Culture: Availability and use of data and analytics within an organization. Leaders make more than half of their decisions based on data and analytics.
  5. Data: Structure and formality of the organization’s data governance process and the security of its data. Two-thirds of Leaders trust the quality of their data and analytics. A majority of leaders (57%) adopt enterprise-level standards, policies and practices to integrate data across the organization.
  6. Trust: Organizational confidence. Leaders demonstrate a high degree of trust between individual employees (60% between executives, 53% between business and IT executives).
  7. Sponsorship: Executive support and involvement. Leaders (56%) oversee the use of data and analytics within their own departments, guided by an enterprise-level strategy, common policies and metrics, and standardized methodologies, compared to 20% of the rest.
  8. Funding: Financial rigor in the analytics funding process. Nearly two-thirds of Leaders pool resources to fund analytic investments. They evaluate these investments through pilot testing, cost/benefit analysis and forecasting KPIs.
  9. Expertise: Development of and access to data management and analytic skills and capabilities. Leaders share advanced analytics subject matter experts across projects, where analytics employees have formalized roles, clearly defined career paths and experience investments to develop their skills.

The researchers state that each of the nine levers has a different impact on the organization’s ability to deliver value from data and analytics; that is, all nine levers distinguish Leaders from the rest, but each lever impacts value creation in different ways. The Enable levers need to be in place before value can be seen through the Drive and Amplify levers. The nine levers are organized into three levels:

  1. Enable: These levers form the basis for big data and analytics.
  2. Drive: These levers are needed to realize value from data and analytics; lack of sophistication within these levers will impede value creation.
  3. Amplify: These levers boost value creation.

Recommendations: Creating an Analytic Blueprint

Figure 2. Analytics Blueprint for Creating Value from Data

Next, the researchers offered a blueprint on how business leaders can translate the research findings into real changes for their own businesses. This operational blueprint consists of three areas: 1) Strategy, 2) Technology and 3) Organization.

1. Strategy

Strategy is about the deliberateness with which the organization approaches analytics. Businesses need to adopt practices around Sponsorship, Source of value, and Funding to instill a sense of purpose in data and analytics that connects the strategic vision to tactical activities.

2. Technology

Technology is about the enabling capabilities and resources an organization has available to manage, process, analyze, interpret and store data. Businesses need to adopt practices around Expertise, Data and Platform to create a foundation for analytic discovery to address today’s problems while planning for future data challenges.

3. Organization

Organization is about the actions taken to use data and analytics to create value. Businesses need to adopt practices around Culture, Measurement and Trust to enable the organization to be driven by fact-based decisions.

Summary

One way businesses are trying to outperform their competitors is through the use of analytics on their treasure trove of data. The IBM researchers were able to identify the necessary ingredients to extract value from analytics. The current research supports prior research on the benefits of analytics in business:

  1. Top-performing businesses are twice as likely to use analytics to guide future strategies and guide day-to-day operations compared to their low-performing counterparts.
  2. Analytic innovators 1) use analytics primarily to increase value to the customer rather than to decrease costs or allocate resources, 2) aggregate and integrate different business data silos and look for relationships among once-disparate metrics, and 3) secure executive support around the use of analytics that encourages sharing of best practices and data-driven insights throughout the company.

To extract value from analytics, businesses need to focus on improving the strategic, technological, and organizational aspects of how they treat data and analytics. The research identified nine areas, or levers, executives can use to improve the value they generate from their data.

For the interested reader, I recently provided a case study (see: The Total Customer Experience: How Oracle Builds their Business Around the Customer) that illustrates how one company uses analytical best practices to help improve the customer experience and increase customer loyalty.

————————–

TCE Total Customer Experience

 

Buy TCE: Total Customer Experience at Amazon >>

In TCE: Total Customer Experience, learn more about how you can integrate your business data around the customer and apply a customer-centric analytics approach to gain deeper customer insights.

 

Source

How to Price Your Predictive Application

In a recent survey of 500 application teams, predictive analytics was the number one feature being added to product roadmaps. It’s clear why: Predictive analytics solves critical business challenges and adds tremendous value to applications.

>> Related: How to Package and Price Embedded Analytics <<

As application vendors begin to add predictive insights into their applications, the question becomes: How should they price predictive capabilities? Let’s look at the best (and worst) ways to price and package predictive analytics with your product.

The Wrong Way to Price Predictive Analytics

Say you have a cloud-based customer churn reporting application, and your customers pay a $500 per month subscription to use it. Then you add predictive features as a new module. How will you price that module?

A common predictive analytics pricing strategy is to choose a markup, such as 30 to 50 percent, and add that on top of the current monthly license. If we pick 50 percent, then the new module will be priced at $250 per month. But is that a good way to price your product?

Your price should be based on supply and demand as well as the value your product offers. A typical dashboard application is already a commodity, so it is difficult to charge a premium since many vendors offer similar application functionality. On the other hand, predictive analytics is a hot technology. It provides unique insights into the future, and few vendors have incorporated it in their products. Those who roll out predictive analytics features first will have a head start in capturing the market.

So, even though it may seem like a 50 percent markup is reasonable, that doesn’t get to the heart of the value of predictive analytics. The answer to our earlier question is no: Basing the price of a new premium feature (predictive analytics) off of your commodity features (embedded analytics) is not a good way to price your product.

The Right Way to Price Predictive

If the traditional pricing strategy is out, how do we price your predictive application? Since our new module can predict who will churn, let’s identify what value the module offers for a customer. Say a customer is losing one million dollars annually due to churn and is looking to reduce that by 20 percent a year. By identifying who is likely to churn and taking a proactive approach, the predictive module can help your customer save $200,000 a year—that’s a $200,000 value!

So, how much can you charge a customer for that type of value? Fifteen to 30 percent is reasonable, but let’s be very conservative and say you only charge 10 percent. That’s a price of $20,000 per year (or $1,667 per month) for the predictive module.

Remember, your current churn dashboard costs customers $500 per month. We’ve given the new predictive module a starting price of $1,667 per month, more than three times your current commodity dashboard pricing. This may seem like a lot, but it’s your marketing department’s job to create a strong sales pitch that clearly conveys the value to your customers.
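
The arithmetic behind that comparison, using the article's own example numbers, is simple enough to sketch:

```python
# Compare the flat-markup price with the value-based price from the example.
current_monthly_license = 500                       # existing churn dashboard, $/month

# "Wrong way": a flat 50% markup on the commodity dashboard price.
markup_price = current_monthly_license * 0.50       # $250/month

# "Right way": price against the value delivered to the customer.
annual_churn_loss = 1_000_000                       # what churn costs the customer today
churn_reduction = 0.20                              # targeted improvement
annual_value = annual_churn_loss * churn_reduction  # $200,000/year of value
value_share = 0.10                                  # conservative 10% of that value
value_price_monthly = annual_value * value_share / 12

print(markup_price, round(value_price_monthly))     # 250.0 1667
```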

The Best Way to Price Predictive: Think Outside the Box

Is there something we can do to make the predictive analytics pricing (and the total application package) more appealing to customers? Yes, there is!

Since the new predict module is adding much more value than your dashboard reporting application, your product manager should lead all sales with it. It doesn’t make sense to sell your customers on the old application and then mention the optional predictive module later on. The new predictive application is going to be the most valuable to customers, so make it the focus of the application. The best way to price predictive analytics is to create a new package for, say, $2,000 a month. This should include everything: your commodity dashboard application and your new predictive capabilities.

In summary: Push yourself to calculate the value your product offers. Challenge your marketing team to figure out innovative ways to articulate that value. And finally, come up with forward-thinking packages that lead with your predictive functionality.

See how Logi can help with your next predictive analytics project. Watch a free demo of Logi Predict today >

 

Originally Posted at: How to Price Your Predictive Application by analyticsweek

Data Driven Innovation: A Primer

Data Driven Innovation: A Primer

We are all hearing the hoopla about big data: how it is radically changing the way we look at company data and providing data-driven reasoning for better, less risky decision making. Innovation is one such area, and big data could give data driven innovation (DDI) a real lift. A data-driven approach helps produce better, more targeted, more relevant innovations, the kind that clients and customers actually crave. At its most effective, this bottom-up approach grows out of a good data driven innovation strategy.

Now let's get into the primer on DDI: what it entails and how one could leverage it. Here are the who, what, where, why, and when of data driven innovation.

Why use DDI?
Let us address why we actually need DDI and what it buys us. Consider a situation where you have to come up with your next product, feature, or innovation. Where would that come from? From your gut, based on some hunch, or from hard, actual data from the right sources? Hunch-based discoveries are great, but their failure rate is higher, and they are difficult to validate because the implementer has to run various focus groups, which are themselves flawed to a certain extent. Now consider a case where your product-customer interactions and operations tell you what is relevant and you can use data to understand its impact on the organization. This helps you identify what matters most and choose the idea with substantial data to back it up. There is no need to spend money on focus groups; instead, you leverage real interactions with real customers, leading to real results. That means a lower chance of failure and a cost-effective way to find the next big thing your customers or organization crave most. So DDI is important: it provides a sustainable, continuous way to innovate, iterate, and improve.

What is DDI?
Data driven innovation, as the name suggests, is the way data is used to learn about new features, modifications, and product ideas that are most valued by your current customers and the market landscape. However, its usage and manifestation in an organization can differ based on the organization's structure, maturity, and implementation, and its definition often incorporates the application and purpose it is set to achieve. For some organizations, DDI is a way to find process improvements; for others, it is a way to learn from customers and how they use products in order to identify the next features or products; for still others, it is a sustained source of learning about people, process, and technology. In generic terms, I would call it "a method to innovate, iterate, and improve in a sustainable, continuous way using a data-based decision process, where data is sourced to help learn about the people, processes, and technology critical to your organization."

Who could use DDI?
Data driven innovation is not everyone's game. Not that it is too difficult to implement or requires too much investment, but it does require a certain maturity in your data handling capability before getting started. If you are diligent about using data to learn about your processes and their effectiveness, it will be easier. If you are not yet focused on using data around your products and processes, you still have some distance to travel before you delve into data driven methodologies. It is never too late to start planning and executing strategies to introduce and leverage data points that go beyond your traditional direct customer and product data. So, in short, DDI can be used by any organization that is serious about learning from data. In fact, the smaller the firm, the better it can be implemented and the less it will cost; the more silos and the more complicated the product and process structure, the more it will cost to execute. You can safely gauge your readiness by your management: the more selling your management requires for a data driven project, the farther you are from pursuing full-scale DDI. So first get leadership buy-in on its value, and then start shaping your organization to implement DDI.

Where will DDI take place?
Yes, DDI is a system that runs on data driven insights, and since data is everywhere in an organization, it could show up anywhere. But it is a bit trickier than that. The toughest part is not when data driven decision making is already running in an organization's DNA, but when the organization first decides to get started. DDI requires a careful understanding of how data works and how it can be used to get insights, so where it starts matters. The best starting place for DDI is around project managers, or the project management office (PMO) if your organization is big enough to have one; for agile companies, it should be around group leads. In short, DDI should start from a place that is no stranger to data and understands how to handle it. It can eventually exist everywhere, but it should start where a data driven project will find the most hospitable surroundings. Project managers, supervisors, and PMOs are meant to keep tabs on progress, so they already possess the basic skills of data driven professionals and can therefore best help in understanding and executing a good DDI strategy.

When is the time to delve into DDI?
In short, the sooner the better. DDI requires a substantial amount of preplanning and dedication, and the sooner organizations delve into data driven innovation, the better its execution and value will be. A good implementation requires practice and iteration on data models, validation, analysis, and reporting, so success rarely emerges from the first attempt. Also, the sooner an organization starts moving toward DDI, the sooner it starts acting in ways that facilitate smart data handling, which has its own benefits. One caveat: the organization should have data to play with. Jumping into DDI before data handling capabilities are established can confuse processes and steer the implementation in the wrong direction. So we could reword our "sooner" as: the sooner an organization has started embracing a data-based decision making process, the better.

To summarize, DDI is important and beneficial to any organization. It can help an organization grow sustainably without heavy investment in research and development, it supports continuous improvement without much additional spend, and it can reuse the same infrastructure for sustained learning.

As a treat, here is a video on Big-Data and Innovation:

Originally Posted at: Data Driven Innovation: A Primer by v1shal

What Your Social Data Knows About You – @SethS_D Author #NYTBestSeller Everybody Lies


In this podcast, Seth Stephens-Davidowitz (@SethS_D), author of the New York Times bestseller Everybody Lies, discusses what our social data knows about us. He shares some critical insights into the human psyche and how humans behave differently toward machines than toward fellow humans. This sheds interesting light on how the #JobsOfFuture will use our social and technology interactions to create experiences that best represent and benefit us. He also shares insights into what the future of work will look like and how businesses can use data to create great experiences for employees, workers, clients, and partners. This is a great podcast for anyone looking to understand the depth of insight that data can provide.

Seth’s Book:
Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are by Seth Stephens-Davidowitz amzn.to/2OA0YBs

Seth’s Recommended Read:
Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker amzn.to/2Kl2nsr

Podcast Link:
iTunes: math.im/jofitunes
Youtube: math.im/jofyoutube

Seth’s BIO:
Seth Stephens-Davidowitz has used data from the internet — particularly Google searches — to get new insights into the human psyche.

Seth has used Google searches to measure racism, self-induced abortion, depression, child abuse, hateful mobs, the science of humor, sexual preference, anxiety, son preference, and sexual insecurity, among many other topics.

His 2017 book, Everybody Lies, published by HarperCollins, was a New York Times bestseller; a PBS NewsHour Book of the Year; and an Economist Book of the Year.

Seth worked for one-and-a-half years as a data scientist at Google and is currently a contributing op-ed writer for the New York Times. He is a former visiting lecturer at the Wharton School at the University of Pennsylvania.
He received his BA in philosophy, Phi Beta Kappa, from Stanford, and his PhD in economics from Harvard.

In high school, Seth wrote obituaries for the local newspaper, the Bergen Record, and was a juggler in theatrical shows. He now lives in Brooklyn and is a passionate fan of the Mets, Knicks, Jets, Stanford football, and Leonard Cohen.

About #Podcast:
#JobsOfFuture was created to spark the conversation around the future of work, worker, and workplace. This podcast invites movers and shakers in the industry who are shaping, or helping us understand, the transformation of work.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ play.analyticsweek.com/guest/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#JobsOfFuture #FutureOfWork #FutureOfWorker #FutuerOfWorkplace #Work #Worker #Workplace

Originally Posted at: What Your Social Data Knows About You – @SethS_D Author #NYTBestSeller Everybody Lies