BARC Survey Shows New Benefits from Embedded Analytics

Application teams are embedding analytics in their products at an increasingly rapid pace. More than 85 percent of application teams have embedded dashboards, reports, and analytics in their software products, according to Logi’s 2018 State of Embedded Analytics Report. And they’re seeing value from their efforts: 92 percent of respondents say enhancing their products with analytics has increased competitive differentiation, with over 90 percent crediting it with improving win rates, increasing adoption, and reducing customer churn.

Now a new survey from the Business Application Research Center (BARC) indicates even more value may come from embedded analytics. According to The BI Survey 2018, the world’s largest annual survey of business intelligence (BI) software users, companies that encourage more users to adopt BI also see additional business benefits from their BI projects.


“Companies claiming to have achieved the most benefit from their BI tools (‘Best-in-Class’) have on average nine percent more BI users than those achieving the least benefit (‘Laggards’), suggesting that there is a relationship between the number of BI users and the degree of benefits an organization gains,” writes BARC in the report. “This should provide an incentive for businesses to maximize BI tool penetration and train as many employees as possible to use their BI tool.”

If more BI users means more business benefits, the natural question becomes: how do you get more BI users? As Logi’s own data from the 2018 State of Embedded Analytics Report shows, the best way to increase adoption of BI tools is to embed analytics in the applications people already use.

[Chart: adoption of standalone vs. embedded analytics]

In fact, embedded analytics sees twice the adoption rate of standalone BI solutions. Why? Because business users want to stay in one place, not jump from application to application to get what they need. In the 2017 State of Analytics Adoption Report, over 83 percent of business professionals expressed a strong desire to stay in one application when and where a decision is needed, rather than waste precious time switching applications. People clearly want their information in the context of where they work, and embedded analytics delivers on this need.

According to our survey, 67 percent of application teams say time spent in their applications increased after they embedded analytics. On top of that, they cite the substantial business benefits of embedding analytics:

  • 96 percent of companies said embedded analytics contributes to overall revenue growth
  • 94 percent said it boosts user satisfaction
  • 93 percent said they’ve improved user experiences

 

Ready to embed analytics in your application? Gartner outlines best practices on evaluating solutions in its analyst paper, “5 Best Practices for Choosing an Embedded Analytics Platform Provider.”

 

Source by analyticsweek

Guide to business intelligence and health IT analytics


Introduction
Technology is frequently used as a tool through which healthcare providers and their IT departments can monitor and improve the business and personal performance of every aspect of their organization. For example, an analytics program that is deployed to examine a patient population’s medical data can then become the starting point for a provider’s business intelligence program. The results found by mining patient data can inform future care decisions and help the IT team discover any technology-related operational malfunctions.

There’s no doubt technology can be a valuable asset to healthcare practitioners when used properly, but convincing them to use new technology hasn’t been a cinch. Some physicians neglect clinical decision support tools in favor of consulting a colleague. Installing new technology that contains patient data also creates additional security concerns for healthcare organizations. Whether new technology can analyze data without improperly exposing protected health information will be key to determining how much it can improve the delivery of healthcare.

1. Business intelligence
Applications of healthcare business intelligence

There is more data than ever for healthcare providers to use to maximize their operational efficiency. Information derived from social media and captured on patients’ mobile health devices are two examples. This section covers how providers are using business intelligence tools to analyze data and improve the experience of their patients. Business intelligence through cloud computing is an option for providers, but it comes with its own set of security issues.

Tip
Discover how providers apply business intelligence to big data

Social media is yet another source of data through which providers can monitor patients and health trends. Learn how they can apply this data to their business goals.

Tip
Business advantages of cloud have the attention of healthcare organizations

Security is a particularly strong concern for healthcare organizations that deploy cloud services.

Tip
Five keys to mastering healthcare business intelligence

A successful business intelligence program starts with good data. What’s required to turn that data into meaningful analysis may be a surprise.

Tip
Tips for patching analytics, business intelligence errors

Find out why healthcare analytics and business intelligence technology can fail, even after those systems are up and running.

Tip
Boost in computing power multiplies power of health IT

Cloud computing and artificial intelligence are only two of the business intelligence tools that are molding the future of healthcare.
2. Analytics at the point of care
Clinical decision support and health IT analytics

How can providers mine health data for information without exposing patients’ private information? That important question is examined in this section of the guide. Also, learn why some physicians have accepted the analysis provided to them via clinical decision support tools and why others still refuse to consult this form of technology for a second opinion when making a decision about a patient’s care. Like every other form of technology, healthcare analytics resources are only as good as their security and backup measures allow them to be. A cybersecurity expert explains how to approach protecting your health IT department from today’s threats.

News
The ups and downs of clinical decision support

A years-long government-sponsored study turned up some surprising results about the efficacy of analytics.

Podcast
Cybersecurity pro examines threats in healthcare

Analytics are no use unless healthcare organizations protect their data. Mac McMillan dishes out advice on what security precautions to take.

Tip
Privacy a top concern during clinical analysis

An analytics expert explains under which circumstances a patient’s identifying information should be available.

Tip
Analytics becoming a way of life for providers

Discover why more healthcare organizations are using analytics tools to keep up with regulatory changes.

Tip
Real-time analytics slowly working its way into patient care

Find out why physicians are wary of becoming too reliant on clinical decision support tools.

Tip
Quantity of data challenges healthcare analysts

There are a few simple steps health IT analysts should follow when examining data they are unfamiliar with.

Tip
Analytics backups preserve clinical decision support

Too many healthcare organizations take analytics for granted and don’t realize what would happen to their workflows if their backups failed.
3. Population health management
How technology controls population health

Population health management, or the collective treatment of a group of patients, is an area that has matured along with the use of technology in healthcare. Though technology has come a long way, there are still hurdles, including those involving the exchange of health information among care facilities, that are causing hospitals to achieve treatment advances at different rates. This section contains information on why participating in an accountable care organization is one way for healthcare providers to commit to improving their population’s health and why that commitment has proven elusive for some.

Feature
Population health management in the home

Find out when healthcare services could become part of your cable bill.

Tip
Accountable care progress held up by technology

Technology that supports health information exchange is being adopted at a plodding rate. Learn why this is affecting accountable care organizations.

Feature
Karen DeSalvo, M.D. explains her public health mission

Karen DeSalvo goes into why public health goals shouldn’t be brushed aside.

Feature
Clinical decision support education a must

Too many physicians still don’t know how to use clinical decision support technology to their advantage.

Podcast
Chief information officer walks through population health process

The CIO of a New Jersey hospital system shares his organization’s technology-based population health plan and how it will lead to accountable care.

Feature
National health IT coordinator talks population health

The head of the Office of the National Coordinator for Health IT explains her career background and the early days of her government tenure.

Note: This article originally appeared on TechTarget.

Originally Posted at: Guide to business intelligence and health IT analytics by analyticsweekpick

Enhancing CRM with Real-Time, Distributed Data Integrations

The world of CRM is relatively slow to change. These repositories still excel at making available numerous types of largely historical data based on user accounts, frequently asked questions, manual notes, and databases containing these and other forms of customer information.

According to UJET CEO Anand Janefalkar, however, CRM is much less effective for real-time data, particularly data spawned from heterogeneous settings involving contemporary applications of the Internet of Things and mobile technologies: “It just takes a different level of focus to not only reduce the latency, not only shift its intent, but also have a specific focus on real-time interactions and user experience.”

Nonetheless, a number of contemporary developments are taking place within the CRM space (and within customer service in general) that are designed to enrich the customer experience, and the service that organizations provide to their endpoint customers, at velocities comparable to those of modern mobile and big data technologies.

Prudent usage of these mechanisms produces “bi-directional, smart, high-bandwidth communication so that way, there are no artificial limits, and there’s all of the options available for someone to really curate and configure their vision of the user journey,” Janefalkar mentioned.

Embedded, Cloud-Based Data Integrations
Embedding contemporary contact center options, typically in the form of widgets or adaptors, inside of CRM suddenly makes the CRM viable for a host of real-time data sources. Many contact center solutions are delivered through the cloud and offer omni-channel experiences in which users can communicate with the enterprise via text, chat, mobile apps, web sites, phone calls, and just about any other form of electronic communication. Highly competitive platforms “design an extremely meticulous user experience to enable agents and customers to communicate visually and contextually,” Janefalkar said.

By embedding the adaptors for these solutions into CRM, organizations can now make available an assortment of low-latency data which otherwise would have proved too arduous to assemble quickly enough, and which can drastically improve customer service. Examples of these data sources include “photos, videos, screenshots, sensor data, [which] is either requested by an agent or sent from the web site or the smart phone app to the agent,” Janefalkar revealed. “All of that gets stored into the CRM in real time.” With this approach, CRM is suddenly equipped with a multitude of largely unstructured data to associate with specific customers.
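A rough sketch of this pattern, assuming a hypothetical CRM REST endpoint and field names (this is not UJET’s or any particular CRM’s actual API): the embedded widget’s backend forwards each captured artifact to the active case in real time, so it lands on the ticket the moment the customer uploads it.

```python
import requests

CRM_BASE_URL = "https://crm.example.com/api/v1"  # hypothetical CRM REST endpoint
API_TOKEN = "..."  # credentials issued by the CRM (placeholder)

def attach_media_to_case(case_id: str, media_url: str, media_type: str, source: str) -> dict:
    """Attach a photo, video, screenshot, or sensor payload reference to a CRM case.

    The embedded contact-center widget would call this as soon as the customer
    uploads the artifact, so the agent (and any downstream assessor) sees it on
    the ticket immediately instead of requesting it again by email.
    """
    payload = {
        "attachment": {
            "url": media_url,    # where the artifact was stored (e.g., object storage)
            "type": media_type,  # "photo", "video", "screenshot", "sensor_data"
            "source": source,    # "mobile_app", "web_widget", etc.
        }
    }
    resp = requests.post(
        f"{CRM_BASE_URL}/cases/{case_id}/attachments",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # the CRM's record of the new attachment
```

In the insurance scenario described below, the mobile app would make one such call per photo or document captured at the scene, so the case record is already complete when a claims assessor picks it up.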

Decentralized Use Cases
The practical business value of enhancing CRM with low-latency data integrations from distributed sources varies by vertical, yet it is almost always demonstrable. Perhaps the most significant aspect of this methodology is that it enables low-latency integrations of data from distributed sources outside of enterprise firewalls. In insurance, for example, if a customer gets into a fender bender, he or she can use a mobile application to present digital identification to the law enforcement officer summoned, then inform the process with contextual and visual information regarding the encounter. This information might include photographs or even videos of the scene, all of which is transmitted alongside any other digital information attained at the time (such as the other party’s contact and insurance information) and embedded into “the case or the ticket,” Janefalkar said.

The resulting workflow efficiency contributes to faster resolutions and better performance because “when the first agent is contacted, they take this information and it gets logged into the CRM,” Janefalkar explained. “And then, that gets passed over to a claims assessor. All that information’s already there. The claims assessor doesn’t have to call you back and ask the same questions, ask you to send an email with the photos that you have. Obviously, since it’s after the fact you wouldn’t have access to a video of the site, because you may not have taken it.”

Visual and Contextual Data Integrations
The rapid integration of visual and contextual decentralized data inside CRM to expedite and improve customer service is also an integral approach to handling claims of damaged or incorrect items from e-commerce sites, and there is a wide range of applicability in other verticals as well.

The true power of these celeritous integrations of data within CRM is that they expand the utility of these platforms, effectively modernize them at the pace of contemporary business, and “make them even better by providing a deep integration into the CRMs so that all of the data and business rules are fetched in real time, so that the agent doesn’t have to go back and forth between different tabs or windows,” Janefalkar said. “But also, when the conversation is done and then the photos and the secure information, they’re not going through any different source. It gets completely archived from us and put back into the source of truth, which usually is the CRM.”

Source: Enhancing CRM with Real-Time, Distributed Data Integrations by jelaniharper

New Mob4Hire Report “The Impact of Mobile User Experience on Network Operator Customer Loyalty” Ranks Performance Of Global Wireless Industry

Mob4Hire, in collaboration with leading customer loyalty scientist Business Over Broadway, today announced the Summer 2010 edition of its “Impact of Mobile User Experience on Network Operator Customer Loyalty” international research, conducted during the spring. The 111-country survey analyzes the impact of mobile apps across many dimensions of the app ecosystem as it relates to customer loyalty toward network operators.

Read the full press release here: http://www.prweb.com/releases/2010/08/prweb4334684.htm. The report is available at http://www.mob4hire.com/services/global-mobile-research for $495 (individual license, 1-3 people) or $995 (corporate license, 3+ people).

Source by bobehayes

Life Might Be Like a Box of Chocolates, But Your Data Strategy Shouldn’t Be

“My momma always said, ‘Life was like a box of chocolates. You never know what you’re gonna get.’” Even if everyone’s life remains full of surprises, the truth is that what applied to Forrest Gump in Robert Zemeckis’s 1994 movie shouldn’t apply to your data strategy. As you take the very first steps into your data strategy, you first need to know what’s inside your data, and this part is critical. To do so, you need the tools and methodology to step up your data-driven strategy.


Why Data Discovery?

With the increased affordability and accessibility of data storage over recent years, data lakes have grown in popularity. This has left IT teams with a growing number of diverse known and unknown datasets polluting the data lake in volume and variety every day. As a consequence, everyone is facing a data backlog. It can take weeks for IT teams to publish new data sources in a data warehouse or data lake, and it takes hours for line-of-business workers or data scientists to find, understand, and put all that data into context. IDC found that only 19 percent of the time spent by data professionals and business users can really be dedicated to analyzing information and delivering valuable business outcomes.

Given this new reality, the challenge is to overcome these obstacles by bringing clarity, transparency, and accessibility to your data, as well as extracting value from legacy systems and new applications alike. Wherever the data resides (in a traditional data warehouse or hosted in a cloud data lake), you need to establish proper data screening so you can get the full picture and maintain a complete view of the data flowing in and out of your organization.

Know Your Data

When it’s time to start working on your data, it’s critical to explore the different data sources you wish to manage. The good news is that the newly released Talend Data Catalog, coupled with Talend Data Fabric, is here to help.

As mentioned in this post, Talend Data Catalog will intelligently discover all the data coming into your data lake so you get an instant picture of what’s going on in any of your datasets.

One of the many interesting use cases of Talend Data Catalog is to identify and screen any datasets that contain sensitive data so that you can further reconcile them and apply data masking, for example, to enable relevant people to use them across the entire organization. This helps reduce the burden on any data team wishing to operationalize regulatory compliance across all data pipelines. To discover more about how Talend Data Catalog can help with GDPR compliance, take a look at this Talend webcast.
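As a minimal sketch of the masking step, assuming a generic tabular dataset and made-up column names (this is plain Python, not Talend Data Catalog’s actual interface), columns flagged as sensitive during discovery can be replaced with stable, non-reversible tokens before the data is shared more widely:

```python
import hashlib
import pandas as pd

SENSITIVE_COLUMNS = {"email", "phone", "ssn"}  # assumed output of a discovery/screening step

def mask_value(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    return hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:12]

def mask_sensitive_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the dataset with flagged columns masked.

    Hashing keeps join keys usable (the same input always yields the same token)
    while preventing downstream users from reading raw personal data.
    """
    masked = df.copy()
    for col in SENSITIVE_COLUMNS & set(masked.columns):
        masked[col] = masked[col].map(mask_value)
    return masked

customers = pd.DataFrame({
    "customer_id": [1, 2],
    "email": ["ada@example.com", "grace@example.com"],
    "country": ["UK", "US"],
})
print(mask_sensitive_columns(customers))
```

The same pattern extends naturally to redaction or format-preserving tokenization; the key point is that masking happens after discovery flags the columns and before the dataset is published for broad use.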

Auto Profiling for All with Data Catalog

The auto-profiling capabilities of Talend Data Catalog facilitate data screening for non-technical people within your organization. Simply put, the data catalog provides automated discovery and intelligent documentation of the datasets in your data lake. It comes with easy-to-use profiling capabilities that help you quickly assess data at a glance. With trusted, auto-profiled datasets, you get powerful visual profiling indicators, so users can easily find the right data in a few clicks.

Not only can Talend Data Catalog bring all of your metadata together in a single place, but it can also automatically draw the links between datasets and connect them to a business glossary. In a nutshell, this allows organizations to:

  • Automate the data inventory
  • Leverage smart semantics for auto-profiling, relationship discovery, and classification
  • Document and drive usage now that the data has been enriched and becomes more meaningful

Go further with Data Profiling

Data profiling is a technology that will enable you to discover your datasets in depth and accurately assess multiple data sources based on the six dimensions of data quality. It will help you identify if and how your data is inaccurate, inconsistent, or incomplete.

Let’s put this in context. Think about a doctor’s exam to assess a patient’s health: nobody wants to undergo surgery without a precise and thorough examination first. The same applies to data profiling. You need to understand your data before fixing it. Because data often comes into the organization inoperable, in hidden formats, or unstructured, an accurate diagnosis gives you a detailed overview of the problem before you fix it. This saves time for you, your team, and your entire organization, because you will have already mapped this potential minefield.
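A minimal, tool-agnostic sketch of that diagnostic step, assuming a small example dataset and a single regex validity rule (this is plain pandas, not Talend’s profiling engine), might report completeness, uniqueness, and validity per column before any fixing begins:

```python
import pandas as pd

def profile(df: pd.DataFrame, validity_rules: dict) -> pd.DataFrame:
    """Report a few classic data quality dimensions per column.

    completeness: share of non-null values
    uniqueness:   share of distinct values among non-null values
    validity:     share of non-null values matching a regex rule, if one is given
    """
    rows = []
    for col in df.columns:
        series = df[col]
        non_null = series.dropna()
        completeness = len(non_null) / len(series) if len(series) else 0.0
        uniqueness = non_null.nunique() / len(non_null) if len(non_null) else 0.0
        rule = validity_rules.get(col)
        validity = (
            non_null.astype(str).str.match(rule).mean()
            if rule is not None and len(non_null) else None
        )
        rows.append({"column": col, "completeness": completeness,
                     "uniqueness": uniqueness, "validity": validity})
    return pd.DataFrame(rows)

orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "email": ["a@x.com", "bad-email", None, "c@y.org"],
    "amount": [10.0, 12.5, 7.0, 7.0],
})
rules = {"email": r"^[^@\s]+@[^@\s]+\.[^@\s]+$"}
print(profile(orders, rules))
```

Real profiling tools cover more dimensions (consistency, timeliness, accuracy) and far larger data volumes, but the principle is the same: measure first, then decide what to fix.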

Easy profiling for power users with Talend Data Preparation: Data profiling shouldn’t be complicated. Rather, it should be simple, fast and visual. For use cases such as Salesforce data cleansing, you may wish to gauge your data quality by delegating some of the basic data profiling activities to business users. They will then be able to do quick profiling on their favorite datasets. With tools like Talend Data Preparation, you will have powerful yet simple built-in profiling capabilities to explore datasets and assess their quality with the help of indicators, trends and patterns.

Advanced profiling for data engineers: Using Talend Data Quality in the Talend Studio, data engineers can connect to data sources to analyze their structure (catalogs, schemas, and tables) and store descriptions of their metadata in its metadata repository. They can then define the available data quality analyses, including database analysis, content analysis, column analysis, table analysis, redundancy analysis, correlation analysis, and more. These analyses carry out the data profiling processes that define the content, structure, and quality of highly complex data structures, and the results are displayed visually as well.

To go further into data profiling take a look at this webcast: An Introduction to Talend Open Studio for Data Quality.

Keep in mind that your data strategy should first and foremost start with data discovery. Failing to profile your data would obviously put your entire data strategy at risk. It’s really about surveying the ground to make sure your data house can be built on solid foundations.

The post Life Might Be Like a Box of Chocolates, But Your Data Strategy Shouldn’t Be appeared first on Talend Real-Time Open Source Data Integration Software.

Originally Posted at: Life Might Be Like a Box of Chocolates, But Your Data Strategy Shouldn’t Be

Tutorial: Using R for Scalable Data Analytics

At the recent Strata conference in San Jose, several members of the Microsoft Data Science team presented the tutorial Using R for Scalable Data Analytics: Single Machines to Spark Clusters. The materials are all available online, including the presentation slides and hands-on R scripts. You can follow along with the materials at home, using the Data Science Virtual Machine for Linux, which provides all the necessary components like Spark and Microsoft R Server. (If you don’t already have an Azure account, you can get $200 credit with the Azure free trial.)

The tutorial covers many different techniques for training predictive models at scale, and deploying the trained models as predictive engines within production environments. Among the technologies you’ll use are Microsoft R Server running on Spark, the SparkR package, the sparklyr package, and H2O (via the rsparkling package). It also touches on some non-Spark methods, like the bigmemory and ff packages for R (and various other packages that make use of them), and using the foreach package for coarse-grained parallel computations. You’ll also learn how to create prediction engines from these trained models using the mrsdeploy package.


The tutorial also includes scripts for comparing the performance of these various techniques, both for training the predictive model:

[Chart: training performance comparison]

and for generating predictions from the trained model:

[Chart: scoring performance comparison]

(The above tests used 4 worker nodes and 1 edge node, all with 16 cores and 112 GB of RAM.)

You can find the tutorial details, including slides and scripts, at the link below.

Strata + Hadoop World 2017, San Jose: Using R for scalable data analytics: From single machines to Hadoop Spark clusters

 

 

Source

Startup Movement Vs Momentum, a Classic Dilemma


In my last post on 80/20 Rules for Startups, I briefly touched on Movement vs. Momentum, and I got some requests to shed more light on it. So here it goes. A startup is all about working hard to validate hypotheses; as you resolve one, another pops up. The work, then, is about validating the highest-priority hypotheses one after the other. Startups that find the fastest route through problem solving to revenue win. In this struggle, startups work very hard to move and establish a business, but not all hard work translates into business, and that is the primary difference between movement and momentum.

 

During this never-ending run to make a startup successful, it is extremely important to prioritize and not spend time running in circles, which means differentiating movement from momentum. It is easy to confuse the two, as they appear to be identical traits. Momentum is moving things in the desired direction, whereas motion, or movement, is simply the state of doing something. Not every motion carries momentum, which is why an ever-increasing mountain of work hours can produce fewer and fewer accomplishments. Startups are no different; they need smart help to work efficiently toward success.

 

This is why we need to understand how startups can work smarter and waste less. Sometimes it is not only about moving faster but about moving smarter. It is also important to recognize how difficult it is to tell movement from momentum: all momentum involves movement, but not all movement has momentum. So, instead of building long, inflated sets of expectations to validate before you launch, make small, digestible assumptions and keep validating them to prove your idea’s worth, chasing the most important assumptions first. Early-stage startups make several zig-zag diversions and get overworked; every effort should be made to point the best effort in the right direction, so the startup saves resources and gets to results quickly.

The following five signs suggest that your startup may be moving but does not yet have momentum:

1. You have not spoken to any prospective customers or users about the need for your product. Many startups suffer from this. They often focus on building the best product and do not get the product idea validated early on, which to me is the primary reason most startups struggle: they invest a lot of time in building the product and defining the feature set, and without much validation. Sure, a disruptive technology often defines and creates its own market. But how many companies do we know that reach Foursquare, Pinterest, Uber, Airbnb, or Twitter stardom? The same probability applies to new and upcoming startups. The safer approach is to validate the problem-solution mix with prospective clients. Maybe that will result in some quick iterations that magnify the impact. Remember that the best ideas often come from clients and prospects. So, if a startup is investing a lot of time building product without getting it validated, it could be moving, just not in the right direction, which increases the chances of failure.

2. Your product has several features, solves various issues, and is going through a long development cycle before customers see what you are up to. This is another issue plaguing some good teams trying to get their products out. Last week I spoke with two startups; both are very ambitious about what they are building and not yet ready to let prospects see their product. They are heads-down perfecting it. This is another sign that you could be overworking the problem. Remember that your ordinary could be someone else’s awesome, and there is no other way to check that than getting your prospects onto the platform and letting them play a little. This will not only surface fundamental issues that crop up in early development but also keep you close to the needs of your prospects and customers, which has always been a recipe for startup success. So, stop adding lots of features and work on the one crazy feature that solves a real problem for your customers and users. This will get the product into your customers’ hands quickly and efficiently and save you from overworking or overthinking.

3. You are investing the majority of your time executing but not focusing enough on planning how best to execute. I am sure we recognize this problem. As a coder and doer, my tendency is also to jump on solving the problem instead of taking a step back and thinking about the best strategy for solving it. Many times the best solution is not the first one that comes to mind. So, if you find yourself doing more and planning less, treat it as a red flag to check and revisit the strategy. Needless to say, doing more can also work, but then you are at the mercy of the probability of picking the right strategy. It is a good idea to revisit the execution strategy; you may find a faster, cheaper, and more optimal way to execute the same thing.

4. You are not iterating enough. Yes, iteration also reveals whether you are working more than you should. A good execution strategy makes small hypotheses, validates them, and keeps iterating until no further assumptions are left unchecked. If you have one product and you are working on it forever without revisiting your strategy, it could be a problem. As with the point above, your first or current solution might not be the best way to approach a problem, so keep yourself open to iterating on your work and chasing the best way to execute. If you have been working really hard on something and have not yet iterated on it, it could be a sign that you are moving in circles rather than moving toward a successful startup.

5. You are still not working toward getting your first check. This might not apply to everyone, but read it in whatever way fits. The idea is that a startup should deliver real value, and having a paid customer or a satisfied user endorses that value. The quicker you get to that stage, the faster you will start seeing the validation you need to move on. If you are not getting your first paid engagement quickly, you could end up spinning in circles because you still have not validated your idea; it is still not known whether someone will actually pay for such a service. It is always better to get to the stage where you are delivering real value to a real company or user. Missing that could leave your startup in an uncertain state, unsure whether it delivers any value.

 

Now, you could argue that life is good on the other side of the fence as well. Sure, startups can succeed in any shape, form, ideology, or strategy. It is simply good strategy to follow a path where risk is minimized. Whether a startup will succeed is anyone’s guess, but a more calculated strategy makes success much more predictable and certain. So, it does not hurt to adopt a strategy that helps you gain momentum and not just move in some direction.

As a treat, here is a video on 10 things startups should focus on.

Source by v1shal

Data center location – your DATA harbour


When it comes to storing and processing your business’s data with a third-party data center or hosting provider, there are many factors to consider, and not all of them can be fully verified by you. There is one aspect, however, that you can investigate fairly easily and form a decent opinion about. Anyone who owns a home or is familiar with the real estate business knows that when purchasing property, one factor has a tremendous influence on the price. You probably guessed it by now: location, location, location.

Almost no business can operate today without depending on data processing, storage, and mobile data access, and because information technology infrastructure has become a commodity, we tend to employ third-party providers to host our data in their facilities. The whereabouts of your data has therefore become a vital factor when choosing a colocation or cloud provider. Have a look at the “Critical decision making points when choosing your IT provider,” and let’s focus on the location factor in deciding whom to entrust with your data.

Considering the location
There are certain key factors to consider when it comes to the location of a provider’s facilities in order to make the most suitable choice for your business:

  1. Natural disasters: What is the likelihood of environmental calamities like hurricanes, tornadoes, catastrophic hail, major flooding, and earthquakes in the area, historically and statistically? Natural disaster hazards are a serious threat because we cannot always forecast them and have no control over these events. Having a disaster recovery or failover site in a location that is prone to natural disasters is dangerous and defeats the purpose of this precaution. If your primary data center is located in an accident-prone area, you should make sure that your disaster recovery and backup sites are outside of high-risk zones.
  2. Connectivity and latency: The location of a data center will have a tremendous impact on the selection and number of available carriers providing network services; remote and hard-to-reach locations will suffer from a smaller selection. Data centers in the vicinity of Internet exchange and network access points enjoy a rich selection of carriers, and thus often lower latency and higher bandwidth at lower cost. Ideally, a multi-tenant data center should be a carrier-neutral facility, which means that the company owning the data center is entirely independent of any network provider and thus not in direct competition with them. This is usually a great incentive for carriers to offer their services in such a facility.
  3. Security: Are the facilities in an area that is easy to secure and designated for business? Considering crime and accidents, is this a low-risk area? Is the facility unmarked and hard to spot during a random check? Data center structures should be hard for passersby to detect and should be in areas that are easy to secure and monitor. Areas with high traffic and crime increase the risk of your data being vulnerable to theft through physical access.
  4. Political and economic stability: Political and economic stability are critical factors in choosing a location. Countries with a track record of civil unrest and economic struggle can prove to be high risk, due to the possibility of facility seizures for political reasons or a higher risk of bankruptcy. The threat of a government being overthrown and your data seized, or of your colocation provider filing for bankruptcy because of currency devaluation, is a huge no-go any way you look at it. Stability is key to guaranteeing your business continuity.
  5. Local laws and culture: Both can have a negative impact on your business, from losing ownership of data to not being able to operate with the same standards that you might be used to and expect. Make sure that you are not breaking any laws in the country your data resides in; what is allowed in one country could be illegal elsewhere. For example, infringement and copyright laws vary greatly between countries, and some of your customers’ data could put you in a tight spot. Furthermore, make sure that language and cultural barriers will not turn out to be showstoppers in your daily operations or when troubleshooting is needed.
  6. Future growth: You might think it is premature and pointless to look into the expansion and growth possibilities that your provider can sustain, but nothing is further from the truth. Finding out that your provider cannot accommodate your growth when you do need to expand can turn into a very pricey endeavor, leading to splitting your infrastructure across multiple locations or even forcing you to switch providers. Always make sure the data center has room to grow, and not only in space but, most importantly, in power. Today’s data centers will sooner face a shortage of power supply than a shortage of space. Find out what their growth potential is in space and power and how soon it can be realized, because in a multi-tenant facility you need to know how quickly they can adapt to buffer all of their customers’ growth.
  7. Redundancy: The location of the facility will also have an impact on its redundancy. To name a few things of importance for running mission-critical applications: continuous access to power from multiple sources in case of outages, multiple fiber entries into the facility to ensure increased network redundancy, and redundant cooling and environmental controls to guarantee continuous operation. These are the basic redundancy factors; depending on your specific requirements for availability, you might need to look much deeper than the facility infrastructure’s redundancy. Talk to your provider about this; they will offer you advice.
  8. Accessibility: This factor is important for multiple reasons, from security concerns to daily operations and disaster situations. Accessibility for specialized maintenance staff and emergency services in case of a crisis, as well as for transportation of equipment and supplies within a reasonable amount of time, is of vital importance. Facilities that are outside the immediate reach of such amenities carry an increased risk of failure and longer recovery times in case of incidents. There could also be a question of your staff needing physical access to the equipment, but with today’s data center providers offering remote hands and usually managed services, you can avoid such complications and have the provider’s local personnel take care of all physical repairs and troubleshooting.

Consequences
The choice of your provider and its location will have severe consequences for your business if things go wrong, and things do go wrong. Data Center Knowledge has compiled an overview of last year’s top 10 outages, and as you can see, some big players in the industry have been brought to their knees.
Such disruptions of service carry tremendous costs for the data centers, and those expenses have been increasing yearly. Just to give you an idea of the kind of losses I am talking about, take a look at the findings of the Emerson Network Power and Ponemon Institute study.

“The study of U.S.-based data centers quantifies the cost of an unplanned data center outage at slightly more than $7,900 per minute.”
– Emerson Network Power, Ponemon Institute
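To put that per-minute figure in perspective, a quick back-of-the-envelope calculation using the study’s rounded average shows how fast the bill grows:

```python
COST_PER_MINUTE = 7_900  # approximate average from the Emerson/Ponemon study

for minutes in (10, 60, 4 * 60):
    print(f"{minutes:>4} min outage ~= ${minutes * COST_PER_MINUTE:,.0f}")

# Roughly $79,000 for 10 minutes, $474,000 for an hour,
# and about $1.9 million for a four-hour incident.
```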

The companies included in this study listed various reasons for outages, from faulty equipment to human error, but 30 percent named weather-related reasons as a root cause.
You can bet that these losses need to be compensated for somewhere, be it by raising prices or cutting staff pay (which usually means hiring less qualified personnel), to name just a few corners that could be cut. While you might be aiming to accomplish more with a steady or even decreasing IT budget, such actions on the provider’s side will prove counterproductive to achieving your goals.

These are the average costs of data center outages for the companies running them, and we have not even touched on the damage to the businesses that suffer from such outages. Your services being unavailable could cost you money directly, by missing your SLAs or losing customers’ orders when your operation comes to an abrupt stop. The long-term impact might be a hit to your reputation, and thus decreased trust in your business’s abilities. Additionally, if you are actively running marketing campaigns and investing in trade shows and other public promotional activities, a service outage on your part can reduce their impact on your brand’s popularity. A number of factors influence just how much downtime can actually cost your business; they are strictly tied to how much your entire operation depends on information technology and how much of it is affected by outages, but that is outside the scope of this article.

Bottom line
Depending on the dynamics of your business, the requirements of compliance, law, and budgeting, and even the personal bias of the decision makers, will render some of the above-mentioned factors more or less important, but in the end this decision will have a short- and long-term impact on your business continuity. So if you are involved in the decision-making process, my advice is: do your homework, talk to the providers, visit the facilities if possible, and take into account the points above. If you are expecting growth, or perhaps want to move away from legacy systems (if they are part of the technology supporting your current operations) and are considering leveraging Infrastructure-as-a-Service models, then talk to the providers on your list and see whether they offer such services and can accommodate your needs. Following these steps, you can reduce or even completely avoid the data center location choice negatively impacting your business’s bottom line!

Source

Analytic Exploration: Where Data Discovery Meets Self-Service Big Data Analytics

Traditionally, the data discovery process was a critical prerequisite to, yet a distinct aspect of, formal analytics. This fact was particularly true for big data analytics, which involved extremely diverse sets of data types, structures, and sources.

However, a number of crucial developments have recently occurred within the data management landscape that resulted in increasingly blurred lines between the analytics and data discovery processes. The prominence of semantic graph technologies, combined with the burgeoning self-service movement and increased capabilities of visualization and dashboard tools, has resulted in a new conception of analytics in which users can dynamically explore their data while simultaneously gleaning analytic insight.

Such analytic exploration denotes several things: decreased time to insight and action, a democratization of big data and analytics fit for the users who need these technologies most, and an increased reliance on data for the pervasiveness of data-centric culture.

According to Ben Szekely, Vice President of Solutions and Pre-sales at Cambridge Semantics, it also means much more, namely a new understanding of the potential of analytics, which necessitates that users adopt:

“A willingness to explore their data and be a little bit daring. It is sort of a mind-bending thing to say, ‘let me just follow any relationship through my data as I’m just asking questions and doing analytics’. Most of our users, as they get in to it, they’re expanding their horizons a little bit in terms of realizing what this capability really is in front of them.”

Expanding Data Discovery to Include Analytics
In many ways, the data discovery process was widely viewed as part of the data preparation required to perform analytics. Data discovery was used to discern which data were relevant to a particular query and to solving a specific business problem. Discovery tools provided this information, which was then cleansed, transformed, and loaded into business intelligence or analytics tools to deliver insight, in a process that was typically facilitated by IT departments and exceedingly time-consuming.

However, as the self-service movement has continued to gain credence throughout the data sphere, these tools have evolved to become more dynamic and celeritous. Today, any number of vendors offer tools that regularly publish the results of analytics in interactive dashboards and visualizations. These platforms enable users to manipulate those results, display them in ways that are the most meaningful for their objectives, and actually utilize those results to answer additional questions. As Szekely observed, oftentimes users are simply “approaching a web browser asking questions, or even using a BI or analytics tool they’re already familiar with.”

The Impact of Semantic Graphs for Exploration
The true potential for analytic exploration is realized when combining data discovery tools and visualizations with the relationship-based, semantic graph technologies that are highly effective on widespread sets of big data. By placing these data discovery platforms atop stacks predicated on an RDF graph, users are able to initiate analytics with the tools that they previously used to merely refine the results of analytics.

Szekely mentioned that: “It’s the responsibility of the toolset to make that exploration as easy as possible. It will allow them to navigate the ontology without them really knowing they’re using RDF or OWL at all…The system is just presenting it to them in a very natural and intuitive way. That’s the responsibility of the software; it’s not the responsibility of the user to try to come down to the level of RDF or OWL in any way.”

The underlying semantic components of RDF, OWL, and vocabularies and taxonomies that can link disparate sets of big data are able to contextualize that data to give them relevance for specific questions. Additionally, semantic graphs and semantic models are responsible for the upfront data integration that occurs prior to analyzing different data sets, structures and sources. By combining data discovery tools with semantic graph technologies, users are able to achieve a degree of profundity in their analytics that would have previously either taken too long to achieve or not have been possible.
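To make the idea concrete, here is a small sketch using the open-source Python rdflib library and a made-up mini vocabulary (it is illustrative only, not Cambridge Semantics’ or Franz’s actual stack): once records from different sources are expressed as RDF triples, an exploratory question becomes a query over relationships, and each answer suggests the next hop to follow.

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary for this sketch

g = Graph()
# Triples integrated from two "sources": a CRM-like record and a support log.
g.add((EX.acme, RDF.type, EX.Customer))
g.add((EX.acme, EX.purchased, EX.widgetPro))
g.add((EX.ticket42, EX.raisedBy, EX.acme))
g.add((EX.ticket42, EX.concerns, EX.widgetPro))
g.add((EX.ticket42, EX.severity, Literal("high")))

# Exploratory question: which products have high-severity tickets,
# and which customers bought them?
query = """
PREFIX ex: <http://example.org/>
SELECT ?customer ?product WHERE {
    ?ticket ex:severity "high" ;
            ex:concerns ?product ;
            ex:raisedBy ?customer .
    ?customer ex:purchased ?product .
}
"""
for customer, product in g.query(query):
    print(customer, product)  # each hit is a starting point for the next question
```

In a production semantic stack, the query and the vocabulary would be hidden behind the discovery tool’s interface, as Szekely notes above; the graph simply makes any relationship in the data available to follow.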

The Nature of Analytic Exploration
On the one hand, that degree of analytic profundity is best described as the ability of the layman business end user to ask many more questions of his or her data, in quicker time frames, than he or she is used to. On the other hand, the true utility of analytic exploration is realized in the types of questions that user can ask. These questions are frequently ad hoc, include time-sensitive and real-time data, and are often based on the results of previous questions and the conclusions one can draw from them.

As Szekely previously stated, the sheer freedom and depth of analytic exploration lends itself to so many possibilities on different sorts of data that it may require a period of adjustment to conceptualize and fully exploit. The possibilities enabled by analytic exploration are largely based on the visual nature of semantic graphs, particularly when combined with competitive visualization mechanisms that capitalize on the relationships they illustrate for users. According to Craig Norvell, Franz Vice President of Global Sales and Marketing, such visualizations are an integral “part of the exploration process that facilitates the meaning of the research” for which an end user might be conducting analytics.

Emphasizing the End User
Overall, analytic exploration relies upon the relationship-savvy, encompassing nature of semantic technologies. Additionally, it depends upon contemporary visualizations to fuse data discovery and analytics. Its trump card, however, lies in its self-service nature, which is tailored for end users to gain more comfort and familiarity with the analytics process. Ultimately, that familiarity can contribute to a significantly expanded usage of analytics, which in turn results in more meaningful data-driven processes from which greater amounts of value are derived.

Originally Posted at: Analytic Exploration: Where Data Discovery Meets Self-Service Big Data Analytics by jelaniharper