How big data is transforming the construction industry

Big data analytics is being adopted rapidly across industries. It enables businesses to manage and analyze vast amounts of data at high speed and to extract insights that improve decision-making.

Construction is one of the industries reaping the benefits of this technology. Construction companies are using big data for a wide range of tasks, from data management to pre-construction analysis.

Here is a look at how big data is transforming the construction industry…

How Construction Companies are Leveraging Big Data Analytics

Handling Large Amounts of Data

Many construction companies juggle several projects at once, and each project requires them to collect, produce, organize, and analyze large volumes of data.

Beyond creating work and progress reports, they also have to manage technical information on various aspects of their projects. All of this unstructured data can burden their databases.

Big data solutions make it possible for construction companies to process massive amounts of data at unprecedented speeds, enabling them to save substantial time and effort, and focus more on the job site instead of IT issues.

Depending on which big data tools they use, they can improve almost every data-related process, from database management to report creation.

According to an article entitled “How Big Data is Transforming the World of Finance”, big data can help businesses create reports on their operations more frequently, or in real time, so that they can make well-informed decisions on a consistent basis.

Predicting Risk

In order to plan and execute projects effectively, construction companies need to be able to predict risks accurately through intelligent use of data.

By implementing big data analytics, they can gain valuable insights that enable them to improve cost certainty, identify and avoid potential problems, and find opportunities for efficiency improvements.

One example of a company using big data analytics to predict construction risk is Democrata.

Democrata conducts surveys to better understand the impact of new roads, high-speed rail links, and other construction projects, and uses big data analytics to run searches and queries on data sets, yielding insights that support better and faster decision-making.

Solving Problems

The ability to solve problems quickly can contribute significantly to the successful completion of construction projects.

Liberty Building Forensics Group is a company that investigates and solves construction and design problems, and it has provided consultation on over 500 projects worldwide, including a Walt Disney project.

According to the company, forensic issues usually occur in major construction projects, and they can cause big problems, such as failure to meet deadlines, if they are not properly assessed.

In order to fix forensic issues efficiently, construction companies have to be able to collect the right data in an organized way and make the data accessible to the right people at the right time. This can be achieved through the implementation of big data solutions.

Presently, big data analytics is relatively immature in terms of functionality and robustness.

As it continues to become more advanced, it will be more widely adopted in the construction industry.

John McMalcolm is a freelance writer who writes on a wide range of subjects, from social media marketing to technology.

Originally posted via “How big data is transforming the construction industry”


Three final talent tips: how to hire data scientists

This last post focuses on less tangible aspects of hiring: curiosity, clarity about what kind of data scientist you need, and having appropriate expectations when you hire.

8. Look for people with curiosity and a desire to solve problems

Radhika Kulkarni, PhD in Operations Research, Cornell University, teaching calculus as a grad student.

As I blogged previously, Greta Roberts of Talent Analytics will tell you that the top traits to look for when hiring analytical talent are curiosity, creativity, and discipline, based on a study her organization did of data scientists. It is important to discover whether your candidates have these traits, because they are what separate people who find practical solutions from those who get lost in theory. My boss Radhika Kulkarni, the VP of Advanced Analytics R&D at SAS, recognized this pattern in herself when she arrived at Cornell to pursue a PhD in math. The realization prompted her to switch to operations research, which she felt would let her investigate practical solutions to problems rather than do more theoretical research.

That passion continues today, as you can hear Radhika describe in this video on moving the world with advanced analytics. She says “We are not creating algorithms in an ivory tower and throwing it over the fence and expecting that somebody will use it someday. We actually want to build these methods, these new procedures and functionality to solve our customers’ problems.” This kind of practicality is another key trait to evaluate in your job candidates, in order to avoid the pitfall of hires who are obsessed with finding the “perfect” solution. Often, as Voltaire observed, “Perfect is the enemy of good.” Many leaders of analytical teams struggle with data scientists who haven’t yet learned this lesson. Beating a good model to death for that last bit of lift leads to diminishing returns, something few organizations can afford in an ever-more competitive environment. As an executive customer recently commented during the SAS Analytics Customer Advisory Board meeting, there is an “ongoing imperative to speed up that leads to a bias toward action over analysis. 80% is good enough.”

9. Think about what kind of data scientist you need

Ken Sanford, PhD in Economics, University of Kentucky, speaking about how economists make great data scientists at the 2014 National Association of Business Economists Annual Meeting. (Photo courtesy of NABE)

Ken Sanford describes himself as a talking geek, because he likes public speaking. And he’s good at it. But not all data scientists share his passion and talent for communication. This preference may or may not matter, depending on the requirements of the role. As this Harvard Business Review blog post points out, the output of some data scientists will be to other data scientists or to machines. If that is the case, you may not care if the data scientist you hire can speak well or explain technical concepts to business people. In a large organization or one with a deep specialization, you may just need a machine learning geek and not a talking one! But many organizations don’t have that luxury. They need their data scientists to be able to communicate their results to broader audiences. If this latter scenario sounds like your world, then look for someone with at least the interest and aptitude, if not yet fully developed, to explain technical concepts to non-technical audiences. Training and experience can work wonders to polish the skills of someone with the raw talent to communicate, but don’t assume that all your hires must have this skill.

10. Don’t expect your unicorns to grow their horns overnight

Annelies Tjetjep, M.Sc., Mathematical Statistics and Probability from the University of Sydney, eating frozen yogurt.

Annie Tjetjep relates development for data scientists to frozen yogurt, an analogy that illustrates how she shines as a quirky and creative thinker, in addition to working as an analytical consultant for SAS Australia. She regularly encounters customers looking for data scientists who have only chosen the title, without additional definition. She explains: “…potential employers who abide by the standard definitions of what a ‘data scientist’ is (basically equality on all dimensions) usually go into extended recruitment periods and almost always end up somewhat disappointed – whether immediately because they have to compromise on their vision or later on because they find the recruit to not be a good team player….We always talk in dimensions and checklists but has anyone thought of it as a cycle? Everyone enters the cycle at one dimension that they’re innately strongest or trained for and further develop skills of the other dimensions as they progress through the cycle – like frozen yoghurt swirling and building in a cup…. Maybe this story sounds familiar… An educated statistician who picks up the programming then creativity (which I call confidence), which improves modelling, then business that then improves modelling and creativity, then communication that then improves modelling, creativity, business and programming, but then chooses to focus on communication, business, programming and/or modelling – none of which can be done credibly in Analytics without having the other dimensions. The strengths in the dimensions were never equally strong at any given time except when they knew nothing or a bit of everything – neither option being very effective – who would want one layer of froyo? People evolve unequally and it takes time to develop all skills and even once you develop them you may choose not to actively retain all of them.”

So perhaps you hire someone with their first layer of froyo in place and expect them to add layers over time. In other words, don’t expect your data scientists to grow their unicorn horns overnight. You can build a great team if they have time to develop as Annie describes, but it is all about having appropriate expectations from the beginning.

To learn more, check out this series from SAS on data scientists, where you can read Patrick Hall’s post on the importance of keeping the science in data science, interviews with data scientists, and more.

And if you want to check out what a talking geek sounds like, Ken will be speaking at a National Association of Business Economists event next week in Boston – Big Data Analytics at Work: New Tools for Corporate and Industry Economics. He’ll share the stage with another talking geek, Patrick Hall, a SAS unicorn I wrote about in my first post.

To read the original article on SAS, click here.


Aligning Sales Talent to Drive YOUR Business Goals

A confluence of new capabilities is creating an innovative, more precise approach to performance improvement. New approaches include advanced analytics, refined sales competency and behavioral models, adaptive learning, and multiple forms of technology enablement. In a prior post (The Myth of the Ideal Sales Profile) we explored an emerging new paradigm that is disrupting traditional thinking with respect to best practices: the world according to YOU.

However, with only 17% of sales organizations leveraging sales talent analytics (TDWI Research), it seems that most CSOs and their HR business partners are gambling — using intuition as the basis for substantial investments in sales development initiatives. If the gamble doesn’t pay off, the investment is wasted.

Is your sales talent aligned with your company’s strategy for increasing revenue? According to the Conference Board, 73% of CEOs say no. This lack of alignment is the main reason why 86% of CSOs expect to miss their 2015 revenue targets (CSO Insights). The ability to properly align your sales talent with your company’s business goals is the difference between being in the 86% or the 14%.

What Happens When You Assume?

Historically, sales and Human Resource leaders based sales talent alignment decisions — both development of the existing team and acquisition of future talent — on assumptions and somewhat subjective data.

Common practices include:

  • Polling the field to determine the focus for sales training
  • Hiring sales talent based largely on the subjective opinion of interviewers
  • Defining your “ideal seller profile” based on the guidance of industry pundits
  • Making a hiring decision based on the fact that the candidate made Achiever’s Club 3 of the last 5 years at their previous company
  • Deploying a sales training program based on what a colleague did at their last company

Aligning sales talent based on any of the above is likely to land your company in the 86% because these approaches fail far more times than they succeed. They fail to consider the many cause-and-effect elements that impact success in your company, in your markets, for your products, and for your customers. As proof of their low success rate, a groundbreaking study by ES Research found that 90% of sales training [development initiatives] had no lasting impact after 120 days. And the news isn’t any better when it comes to sales talent acquisition; Accenture reports that the average ramp-up time for new reps is 7-12 months.

Defining YOUR Ideal Seller Profile(s)

So how does your organization begin to apply the “new way” as an approach to optimizing sales performance? It begins with zeroing in on the capabilities of your salespeople that align most closely with the specific goals of your business. In essence, it means understanding what YOUR ideal seller profiles are.

Applying the new way begins with the specific business goals of your company. What if market share growth were the preeminent strategic goal for your organization? Would it not be extremely valuable to understand which sales competencies were most likely to affect that aspect of your corporate strategy? The obvious answer is yes; and the obvious question is how to align and optimize sales to drive increased market share.

How does a CSO identify where to target development in order to have the biggest impact on business results?

By using facts as the basis for these substantial investments. Obtaining facts requires several essential ingredients. The first is a rigorous, comprehensive model of sales competencies; that is, a well-defined model of “what good looks like” across a broad range of sales competencies. This model can be adapted for a specific selling organization and provides the baseline for sales-specific assessments (personality, knowledge, cognitive ability, behavior, etc.).

Then, by applying advanced analytics, including Structural Equation Modeling (SEM), we can begin to identify cause-and-effect relationships between specific competencies and the metrics and goals of YOUR organization. With SEM, CSOs can statistically identify the knowledge and behaviors that set top performers apart from the rest of their team. With this valuable insight, the organization can align both talent development and acquisition with the company’s most important business goals.
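
The method described here is SEM; as a loose, hedged illustration of the underlying idea (linking assessed competencies to a performance outcome), here is a minimal Python sketch that substitutes a plain logistic regression, using entirely synthetic data and hypothetical competency names:

```python
# Hypothetical sketch: which competencies separate top performers?
# The post describes Structural Equation Modeling (SEM); this uses a
# simpler logistic regression as a stand-in on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
reps = pd.DataFrame({
    "prospecting":       rng.normal(50, 10, n),  # assessment scores
    "negotiation":       rng.normal(50, 10, n),
    "product_knowledge": rng.normal(50, 10, n),
})
# Synthetic ground truth: negotiation skill drives quota attainment.
quota_attained = (0.08 * reps["negotiation"] + rng.normal(0, 2, n) > 4)

model = sm.Logit(quota_attained.astype(int), sm.add_constant(reps)).fit(disp=0)
# A larger positive coefficient suggests a stronger link to attainment.
print(model.params.sort_values(ascending=False))
```

Unlike this stand-in, SEM can model latent constructs and chains of cause and effect, which is why the authors favor it over simple correlation-based analytics.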

Sales Talent Analytics Provide Proof

Times have changed. The days of aligning sales talent based on gut feel, assumptions, or generally accepted best practices are over. By leveraging sales talent analytics, today’s sales leader can stop gambling and apply a proven three-step approach: statistically pinpoint where to focus development of the sales team, quantifiably measure the business impact and ROI of that development, and improve the quality of new hires. But buyer beware: not all analytical approaches are equal. The vast majority leverage correlation-based analytics, which can lead to erroneous conclusions.

To be clear, we’re not eschewing well-designed research that provides insights into the broader application of best practices. Aberdeen Group found that best-in-class sales teams that leverage data and analytics increased team quota attainment by 12.3% year over year (vs. 1% for the average company) and increased average deal size by 8% year over year (vs. 0.8%).

It’s time to define the ideal seller profile for YOUR company. In the next post in this series, we answer the question: how do we capitalize on that understanding to drive the greatest impact on our business goals?

Source: Aligning Sales Talent to Drive YOUR Business Goals by analyticsweekpick

May 8, 2017 Health and Biotech analytics news roundup

HHS’s Price Affirms Commitment to Health Data Innovation: Secretary Price emphasized the need to decrease the burden on physicians.

Mayo Clinic uses analytics to optimize laboratory testing: The company Viewics makes software for the facility, which uses it to look for patterns and increase efficiency.

Nearly 10,000 Global Problem Solvers Yield Winning Formulas to Improve Detection of Lung Cancer in Third Annual Data Science Bowl: The winners of the competition, which challenged contestants to accurately diagnose lung scans, were announced.

Gene sequencing at Yale finding personalized root of disease; new center opens in West Haven: The Center for Genomic Analysis at Yale opened and is intended to help diagnose patients.


7 Limitations Of Big Data In Marketing Analytics

Big data — the cutting edge of modern marketing or an overhyped buzzword? Columnist Kohki Yamaguchi dives into some of the limitations of user-centered data.

As everyone knows, “big data” is all the rage in digital marketing nowadays. Marketing organizations across the globe are trying to find ways to collect and analyze user-level or touchpoint-level data in order to uncover insights about how marketing activity affects consumer purchase decisions and drives loyalty.

In fact, the buzz around big data in marketing has risen to the point where one could easily get the illusion that utilizing user-level data is synonymous with modern marketing.

This is far from the truth. Case in point: Gartner’s hype cycle as of last August placed “big data” for digital marketing near the Peak of Inflated Expectations, about to descend into the Trough of Disillusionment.

It is important for marketers and marketing analysts to understand that user-level data is not the end-all be-all of marketing: as with any type of data, it is suitable for some applications and analyses but unsuitable for others.

Following is a list describing some of the limitations of user-level data and the implications for marketing analytics.

1. User Data Is Fundamentally Biased

The user-level data that marketers have access to covers only individuals who have visited your owned digital properties or viewed your online ads, a group that is typically not representative of your total target consumer base.

Even within the pool of trackable cookies, the accuracy of the customer journey is dubious: many consumers now operate across devices, and it is impossible to tell for any given touchpoint sequence how fragmented the path actually is. Furthermore, those who operate across multiple devices are likely to come from a different demographic than those who use a single device, and so on.

User-level data is far from accurate or complete, which means there is inherent danger in assuming that insights from user-level data apply to your consumer base at large.

2. User-Level Execution Only Exists In Select Channels

Certain marketing channels are well suited for applying user-level data: website personalization, email automation, dynamic creatives, and RTB spring to mind.

In many channels, however, it is difficult or impossible to apply user data directly to execution except via segment-level aggregation and whatever other targeting information is provided by the platform or publisher. Social channels, paid search, and even most programmatic display are based on segment-level or attribute-level targeting at best. For offline channels and premium display, user-level data cannot be applied to execution at all.

3. User-Level Results Cannot Be Presented Directly

More accurately, user-level results can be presented via a few visualizations, such as flow diagrams, but these tend to be incomprehensible to all but domain experts. This means that user-level data needs to be aggregated to at least a daily segment level or property level for the results to be broadly consumable.
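
As a minimal sketch of that aggregation step (in Python with pandas; the touchpoint data and column names are hypothetical), raw user-level rows are rolled up into a daily, per-channel summary that can feed an ordinary chart or table:

```python
# Minimal sketch: roll hypothetical user-level touchpoints up to a
# daily, per-channel summary that non-experts can read.
import pandas as pd

touchpoints = pd.DataFrame({
    "user_id":   ["a", "a", "b", "c"],
    "channel":   ["display", "email", "display", "search"],
    "timestamp": pd.to_datetime(["2015-05-01 09:00", "2015-05-01 12:30",
                                 "2015-05-01 10:15", "2015-05-02 08:45"]),
    "converted": [0, 1, 0, 1],
})

daily = (touchpoints
         .assign(date=touchpoints["timestamp"].dt.date)
         .groupby(["date", "channel"])
         .agg(touches=("user_id", "count"),
              users=("user_id", "nunique"),
              conversions=("converted", "sum")))
print(daily)
```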

4. User-Level Algorithms Have Difficulty Answering “Why”

Broadly speaking, there are only two ways to analyze user-level data: one is to aggregate it into a “smaller” data set in some way and then apply statistical or heuristic analysis; the other is to analyze the data set directly using algorithmic methods.
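
To make the contrast concrete, here is a hedged sketch on synthetic data: the statistical route yields coefficients a marketer can read, while the algorithmic route (a small neural network) offers no comparably direct explanation. This foreshadows the “why” problem discussed next.

```python
# Hedged illustration on synthetic data: a linear model's coefficients
# answer "why"; a neural network's weights do not tell a readable story.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))       # e.g. exposure to campaigns A, B, C
y = (X[:, 1] > 0).astype(int)        # conversions driven by campaign B

lin = LogisticRegression().fit(X, y)
print("coefficients per campaign:", lin.coef_)   # campaign B dominates

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
print("hidden-layer weight matrix:", net.coefs_[0].shape)  # opaque
```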

Both can result in predictions and recommendations (e.g., move spend from campaign A to B), but algorithmic analyses tend to have difficulty answering “why” questions (e.g., why should we move spend?) in a manner comprehensible to the average marketer. Certain types of algorithms, such as neural networks, are black boxes even to the data scientists who designed them. Which leads to the next limitation:

5. User Data Is Not Suited For Producing Learnings

This will probably strike you as counter-intuitive. Big data = big insights = big learnings, right?

Wrong! For example, let’s say you apply big data to personalize your website, increasing overall conversion rates by 20%. While certainly a fantastic result, the only learning you get from the exercise is that you should indeed personalize your website. The result raises the bar on marketing, but it does nothing to raise the bar for marketers.

Actionable learnings that require user-level data – for instance, applying a look-alike model to discover previously untapped customer segments – are relatively few and far between, and require substantial effort to uncover. Boring ol’ small data remains far more efficient at producing practical, real-world learnings that you can apply to execution today.
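
For the look-alike example, one common approach (sketched here with scikit-learn on made-up features; nothing below reflects a specific vendor's method) is to rank a broad population by its similarity to known high-value customers:

```python
# Hypothetical look-alike sketch: rank prospects by nearest-neighbor
# distance to existing high-value customers. All data is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
customers = rng.normal(0.5, 0.1, size=(200, 4))  # known good customers
prospects = rng.uniform(0, 1, size=(5000, 4))    # untapped population

scaler = StandardScaler().fit(customers)
nn = NearestNeighbors(n_neighbors=5).fit(scaler.transform(customers))
distances, _ = nn.kneighbors(scaler.transform(prospects))

scores = distances.mean(axis=1)        # smaller = more customer-like
top = np.argsort(scores)[:100]         # the 100 closest look-alikes
print(prospects[top][:3])
```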

6. User-Level Data Is Subject To More Noise

If you have analyzed regular daily time series data, you know that a single outlier can completely throw off analysis results. The situation is similar with user-level data, but worse.

In analyzing touchpoint data, you will run into situations where, for example, a particular cookie received – for whatever reason – a hundred display impressions in a row from the same website within an hour (this happens much more often than you might think). Should this be treated as a hundred impressions or just one, and how will it affect your analysis results?
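
One hedged cleanup, sketched below in pandas with hypothetical column names, is to collapse such rapid-fire duplicates before analysis, keeping an impression only when more than an hour has passed since the previous impression from the same cookie on the same site:

```python
# Hypothetical cleanup: drop impressions that follow the previous one
# from the same cookie and site by less than an hour.
import pandas as pd

imps = pd.DataFrame({
    "cookie": ["x", "x", "x", "x", "y"],
    "site":   ["news.example"] * 5,
    "ts": pd.to_datetime(["2015-05-01 10:00", "2015-05-01 10:05",
                          "2015-05-01 10:20", "2015-05-01 12:00",
                          "2015-05-01 10:00"]),
}).sort_values(["cookie", "site", "ts"])

gap = imps.groupby(["cookie", "site"])["ts"].diff()
deduped = imps[gap.isna() | (gap > pd.Timedelta("1h"))]
print(deduped)  # cookie x keeps 10:00 and 12:00; cookie y keeps its row
```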

Even more so than “smaller” data, user-level data tends to be filled with so much noise and so many potentially misleading artifacts that it can take forever just to clean the data set enough to get reasonably accurate results.

7. User Data Is Not Easily Accessible Or Transferable

Because of security concerns, user data cannot be made accessible to just anyone, and requires care in transferring from machine to machine, server to server.

Because of scale concerns, not everyone has the technical know-how to query big data efficiently, which leads database admins to limit the number of people who have access in the first place.

Because of the high amount of effort required, whatever insights are mined from big data tend to remain one-off exercises, making it difficult for team members to conduct follow-up analyses and validation.

All of these factors limit the agility of analysis and the ability to collaborate.

So What Role Does Big Data Play?

So, given all of these limitations, is user-level data worth spending time on? Absolutely — its potential to transform marketing is nothing short of incredible, both for insight generation as well as execution.

But when it comes to marketing analytics, I am a big proponent of picking the lowest-hanging fruit first: prioritizing analyses with the fastest time to insight and the largest potential value. Analyses of user-level data fall squarely in the high-effort, slow-delivery camp, with variable and difficult-to-predict value.

Big data may have the potential to yield more insights than smaller data, but it will take much more time, consideration, and technical ability in order to extract them. Meanwhile, there should be plenty of room to gain learnings and improve campaign results using less granular data. I have yet to see such a thing as a perfectly managed account, or a perfectly executed campaign.

So yes, definitely start investing in big data capabilities. Meanwhile, let’s focus as much, if not more, on maximizing value from smaller data.

Note: In this article I treated “big data” and “user-level data” synonymously for simplicity’s sake, but the definition of big data can extend to less granular but more complex and varied data sets.

Originally posted via “7 Limitations Of Big Data In Marketing Analytics”


A Timeline of Future Technologies 2019-2055

Our friends at Futurism used data from the National Academy of Sciences, SmartThings Future Living reports, Scientific American, the University of Bristol, and several other sources to create this fascinating infographic.

A Timeline of Future Technologies 2019-2055

originally posted @ https://futurism.com/images/things-to-come-a-timeline-of-future-technology-infographic/

Source by v1shal

AtScale opens Hadoop’s big-data vaults to nonexpert business users

When it comes to business intelligence, most enterprise users are intimately acquainted with tools such as Microsoft Excel. They tend to feel less comfortable with data-management technologies like Hadoop—despite the considerable insights such tools could offer.

Enter AtScale, a startup that on Tuesday emerged from stealth with a new offering designed to put those capabilities within closer reach. The AtScale Intelligence Platform enables interactive, multidimensional analysis on Hadoop from within standard BI tools such as Microsoft Excel, Tableau Software, or QlikView, without the need for any data movement, custom drivers, or a separate cluster.

“Today, millions of information workers could derive value from Hadoop, but their organizations have not been able to empower them to do so, either because their current toolset doesn’t work natively with Hadoop or because IT doesn’t have the tools to provision them with secure, self-service access,” said Dave Mariani, AtScale’s founder and CEO.

In essence, AtScale’s platform aims to give business users the ability to analyze in real time the entirety of their Hadoop data—tapping Hadoop SQL engines like Hive, Impala and Spark SQL—using the BI tools they are already familiar with. In that way, its intent is similar in many ways to that of Oracle, which recently unveiled new big-data tools of its own for nonexperts.

AtScale’s software strives to make big-data analytics accessible in several ways. Its cube designer, for instance, converts Hadoop data into interactive OLAP cubes with full support for arrays, structs, and non-scalars, enabling complex data to be turned into measures and dimensions that anyone can understand and manage, the company says.
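
AtScale's engine itself is proprietary, but the underlying idea, OLAP-style rollups computed directly by a Hadoop SQL engine, can be sketched with Spark SQL's CUBE operator (PySpark; the game-events table and columns are hypothetical, loosely echoing the Wargaming example below):

```python
# Hedged sketch of the underlying idea, not AtScale's product: an
# OLAP-style cube computed directly by a Hadoop SQL engine (Spark SQL).
# The table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cube-sketch").getOrCreate()
events = spark.createDataFrame(
    [("EU", "battle", 120), ("EU", "login", 30),
     ("NA", "battle", 200), ("NA", "login", 45)],
    ["region", "event_type", "duration_s"])
events.createOrReplaceTempView("game_events")

# CUBE produces every (region, event_type) subtotal plus grand totals,
# i.e. the measures-and-dimensions view a BI tool expects.
spark.sql("""
    SELECT region, event_type,
           COUNT(*) AS events, SUM(duration_s) AS total_duration_s
    FROM game_events
    GROUP BY CUBE(region, event_type)
    ORDER BY region, event_type
""").show()
```

A BI tool pointed at such an engine can then slice these measures by any combination of dimensions without moving the data off the cluster.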

“We have a community of more than 110 million users and a massive amount of data about how people play our games,” said Craig Fryar, head of business intelligence at Wargaming, creator of the online game World of Tanks. “Our cluster stores billions of events that we can now easily explore in just a few clicks.”

Originally posted via “AtScale opens Hadoop’s big-data vaults to nonexpert business users”

Source by analyticsweekpick

May 15, 2017 Health and Biotech analytics news roundup

NHS taps artificial intelligence to crack cancer detection: The UK’s health service, Intel, and the University of Warwick are collaborating to form a ‘digital repository’ of known tumor cells, which they hope to use to develop new algorithms for classification of those cells.

When Cancer Patients Should Ask For Genetic Sequencing: At Memorial Sloan Kettering Cancer Center, more than 10,000 tumor biopsies from patients with advanced cancer were sequenced. Many of the patients had mutations that could be addressed with current treatments.

Intermountain makes strides in precision medicine, advanced imaging: The Utah-based health system has added technology to perform specific and sensitive sequencing of solid tumor cells as well as imaging technology to monitor wounds.

How providers can use analytics to manage risk in value-based care: Two examples are presented: a Maryland hospital and a New York physician’s network.

Source: May 15, 2017 Health and Biotech analytics news roundup by pstein