You thought the human brain was complex? With its ability to retrieve stored memories from years past and forge connections between seemingly disparate topics, the brain truly seems like a miraculous organ that rules our everyday lives. But what about the Google brain? Just as intricate and just as ever-changing as a human’s brain, the Google search engine works to make associations, recommendations, and analyses based on your search phrases.
However, the question remains: how does Google understand what we want from it? When we ask it a question, how do those millions of results show up for us effortlessly, ranked by relevancy and authority? Every one of us takes this process for granted, so in this infographic we’ll look at the inner mechanics of the Google search engine that produce the results you see on your screen.
Gordon Square Communications and WAAT offer tips on how to make the most of online resources to land a dream job, all without spending a penny.
You are probably familiar with Monster.com or Indeed.com, huge jobs websites where you can upload your CV along with 150 million other people every month.
The bad news is that it is unlikely your CV will ever get seen on one of these websites, as attendees of the London Technology Week event Using Tech to Find a Job at Home or Abroad discovered.
“There are too many people looking for a small number of jobs,” says Sylvia Arthur, Communicator Consultant at Gordon Square Communications and author of the book Get Hired!, out on 30th June.
“The problem is that only 20% of jobs are advertised, while 25% of people are seeking a new job. If you divide twenty by twenty-five, the result of the equation is that you lose,” explains Ms Arthur.
So, how can we use technology to effectively find a job?
The first step is to analyse the “Big Data”: all the information that tells us about trends or associations, especially relating to human behaviour.
For example, if we were looking for a job in IT, we could read in the news that a new IT company has opened in Shoreditch, and from there understand that there are new IT jobs available in East London.
Big Data also tells us about salaries and cost of living in different areas, or what skills are required.
“Read job boards not so much to find a job as to understand the growing sectors and the jobs of the future,” is Ms Arthur’s advice.
Once you know where to go with the skills you have, you need to bear in mind that most recruiters receive thousands of CVs for a single job and they would rather ask a colleague for a referral than scan through all of them.
So if you are not lucky enough to have connections, you need to be proactive and make yourself known in the industry. “Comment, publish, be active in your area, showcase your knowledge,” says Ms Arthur.
“And when you read about an interesting opportunity, be proactive and contact the CEO; tell them what you know and what you can do for them. The LinkedIn Premium free trial is a great tool to get in touch with these people.”
Another good piece of advice is to follow the key people in your sector on social media. Of all the jobs posted on social media, 51% are on Twitter, compared to only 23% on LinkedIn.
And for those looking for jobs in the EEA, it is worth checking out EURES, a free online platform where job seekers across Europe are connected with validated recruiters.
“In Europe there are some countries with a shortage of skilled workforce and others with high unemployment,” explain Grzegorz Gonciarz and Vamory Traore from WAAT.
“The aim of EURES is to tackle this problem.”
Advisers with local knowledge also help jobseekers to find more information about working and living in another European country before they move.
As for recent graduates looking for experience, a new EURES program called Drop’pin will start next week.
The program aims to fill the skills gap that separates young people from recruitment through free training sessions both online and on location.
To read the original article on London Technology Week, click here.
Today we’re comparing three soft drink brands: Coca Cola, Pepsi and Red Bull. All are big names in the beverage industry. We’ll use BuzzTalk’s benchmark tool to find out which brand is talked about the most and how people feel about each brand. As you probably know, it’s not enough for people to talk about your brand. You want them to be positive and enthusiastic.
Coca Cola has the largest Share of Voice
To benchmark these brands we’ve created three Media Reports in BuzzTalk, all set up the same way. We include news sites, blogs, journals and Twitter for the time period starting 23 September 2013. Printed media are not included in these reports.
As you can see, Coca Cola (blue) is the dominant brand online. Nearly 45% of the publications mention Coca Cola. Red Bull (green) and Pepsi Cola (red) follow close together at 29% and 26% respectively.
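Share of voice is simply each brand’s fraction of total publications. A toy recomputation, with made-up counts chosen only to mirror the chart’s percentages:

```python
# Hypothetical publication counts chosen to mirror the chart's percentages;
# share of voice is each brand's share of all publications, as a percentage.
def share_of_voice(counts):
    total = sum(counts.values())
    return {brand: round(100 * n / total, 1) for brand, n in counts.items()}

mentions = {"Coca Cola": 450, "Red Bull": 290, "Pepsi": 260}  # illustrative
print(share_of_voice(mentions))
# {'Coca Cola': 45.0, 'Red Bull': 29.0, 'Pepsi': 26.0}
```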
Benchmarking the Buzz: not all buzz is created equal
Coca Cola doesn’t dominate everywhere on the web. If we take a closer look, Coca Cola’s dominance is predominantly caused by its share of tweets. When we zoom in on news sites we notice it’s Red Bull that has the biggest piece of the pie. On blogs (not shown) Coca Cola and Red Bull match up.
Is Coca Cola’s dominance on Twitter due to Beliebers?
About 99.6% of Coca Cola related publications are on Twitter. Most of these tweets relate to the Coca-Cola.FM radio station in South America in connection with Justin Bieber. On 12th November Coca Cola streamed a concert by the young pop star, and what we’re seeing here is the effect of ‘Beliebers’ on the share of voice.
The Coca Cola Christmas effect can still be detected
The Bieber effect is even stronger than Christmas (42,884 versus 2,764 tweets).
Last year we demonstrated what marks the countdown to the holidays: the release of the new Coca Cola TV commercial. What we noticed then was a sudden increase in the mood state ‘tension’. In the following graph you can see it’s still there (Coca Cola is still in blue).
The mood state ‘tension’ relates to both anxiety and excitement. It’s the emotion we pick up during large product releases. If this is the first time you’re reading about mood states, we recommend reading this blogpost as an introduction. Mood states are an interesting add-on to sentiment, to be used in predictions about human behavior. The ways in which actual predictions can be made are the subject of ongoing research.
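BuzzTalk doesn’t publish its mood-state method, but lexicon-based tagging is a common way such tools work. A minimal sketch, with hypothetical word lists that are not BuzzTalk’s actual lexicon:

```python
# Minimal lexicon-based mood-state tagger (hypothetical word lists, not
# BuzzTalk's actual lexicon): a publication is tagged with every mood
# state whose lexicon shares at least one word with the text.
import re

MOOD_LEXICON = {
    "tension": {"anxious", "excited", "countdown"},
    "fatigue": {"tired", "exhausted", "sleepy"},
    "vigor": {"energetic", "lively", "wings"},
}

def mood_states(text):
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(m for m, lexicon in MOOD_LEXICON.items() if words & lexicon)

print(mood_states("So tired today, but Red Bull gives you wings"))
# ['fatigue', 'vigor']
```

Real systems use far larger lexicons and weighting, but the principle — counting mood-laden words per publication — is the same.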
How do we feel about these brands?
Let’s examine some more mood states and see whether we can find one that’s clearly associated with a brand. As you can see in the graphs below, each soft drink brand gets its fair share of the mood state tension. Tension is not specific to Coca Cola, though it is more prominent during the countdown towards Christmas.
Pepsi Cola evokes the most ‘confusion’ and slightly more ‘anger’. The feelings of confusion are often related to feeling guilty after drinking (too much) Pepsi.
Red Bull generates the most mood states, as it dominates not only for fatigue but also, to a lesser extent, for depression, tension and vigor.
The number of Red Bull publications in which the mood state fatigue can be detected is striking. They say “Red Bull gives you wings” and this tagline has become famous. People now associate tiredness with the desire for Red Bull. But people also blame Red Bull for (still) feeling tired, or more tired. At least it’s good to see Red Bull also has its share in the ‘vigor’ mood state department.
To read the original article on BuzzTalk, click here.
Yes, you read it right. It is a light title for a serious problem. I spoke with big-data scientists at some Fortune 100 companies and tried to probe them to learn their strategies for tackling big data and how they figure out the method or tool that works best for them. It was interesting to hear their stories, to learn all the options available to them and how they ended up picking a tool. I was mulling over the problem when, one night, I saw my two-year-old daughter cry non-stop. We all huddled to find out what was troubling her. Then it occurred to me that this is a situation similar to the one companies face today.
First, let me explain what happened, and then I will try to make the connection as to why and how it is relevant. One evening, my daughter, who had just turned two, started acting fussier than normal. There were some guests at home, so like normal parents we started figuring out what was bothering her in order to calm her down, but nothing seemed to be working. One of the guests put forward a suggestion for the reason for her fussiness, and then other theories got added. All of us were trying to find the right reason from our individual experience, and soon a combination of various tricks worked and she found her peace. The reason for the fussiness is not important here; the good part is that she relaxed.
Now, this is the problem that most companies face today. Like my daughter, they are all fussy because they all have a big-data problem: a lot of unknowns hiding in their data. They can barely understand how to find them, let alone how to put them to use. And the visualization tools are like the guests, parents and everybody else around my daughter, each trying to figure out their own version of what is happening: it’s chaos. If you let one of the many work out their version of what it is, they may be off track for quite some time, which could be painful, discomforting and wrong. On the other hand, a model of collective wisdom worked best, as everyone gave their quick thoughts, which helped us collaborate, iterate on the information and figure out the best path.
Now consider companies using multiple tools on their problem, and babysitting them for days, months or years, costing time, money and resources. These tools could end up becoming the best nanny there is, or the worst one. The outcome is anyone’s guess, but if you get a good tool, will you ever find out whether there is a better or best tool out there? That is the problem the big-data industry faces today. Unlike traditional appliances and tools, a big-data tool requires a considerable cash influx and a time and resource commitment, so going through a long sales cycle and marrying a single tool should not be high on anyone’s chart.
Before you go hunting, make sure to create a small data set that best represents your business chaos. The data should cover almost every aspect of your business, so that it can serve as a good recruiting test for a data discovery platform. I will go deeper into the good preparatory steps to take before you go shopping, but for this blog, let’s make sure we have a basic data set ready for testing the tools.
Now, the best approach to recruiting a visualization framework goes through one of three ways:
1. Hiring an independent consultancy. Just as we consult pediatricians for their expertise in dealing with baby problems, we could hire a specialized shop that works closely with your business and with the data visualization vendors. These consultants help companies recruit tools by acting as a mediation layer, filtering out any bias or technological challenge that restricts your decision-making. They can sit with your organization, understand its requirements and go tool-fishing, recommending the tool that best suits your needs.
2. Maximizing the use of trial periods. Just as we quickly try things out and validate which method pacifies the kid, rather than getting into a long cycle of failures, we can treat tools the same way. This technique is painful, but it still does relatively less damage than going full throttle with one tool on a long journey of failure. It prepares you with the mindset and the tactical and strategic agenda to hire and fire tools fast, and to pick the tool delivering maximum value per dataset. It is relatively expensive among the three and could introduce some bias into the decision-making.
3. Going with platform plays. A pediatric clinic stocks almost everything that could help pacify the situation; similarly, some vendors provide a platform that lets you experiment with all those methodologies and pick the best combination for your system. These vendors are not tied to any one visualization technique: they make everything available to clients and help them settle on the best package out there. With such a system in place, you can make sure your business interest gets the highest precedence, not any specific visualization or discovery technique. To keep the blog clean of shout-outs I will keep company names out of the text, but do let me know if you are interested in which companies provide a platform play for you to experiment with.
And that is how you make the baby stop crying in the fastest, most cost-effective and most business-responsive manner.
Democratizing Big Data refers to the growing movement of making products and services more accessible to other staffers, such as business analysts, along the lines of “self-service business intelligence” (BI).
In this case, the democratized solution is “the all-in-one Lavastorm Analytics Engine platform,” the Boston company said in an announcement today of product improvements. It “provides an easy-to-use, drag-and-drop data preparation environment to provide business analysts a self-serve predictive analytics solution that gives them more power and a step-by-step validation for their visualization tools.”
It addresses one of the main challenges to successful Big Data deployments, as listed in study after study: lack of specialized talent.
“Business analysts typically encounter a host of core problems when trying to utilize predictive analytics,” Lavastorm said. “They lack the necessary skills and training of data scientists to work in complex programming environments like R. Additionally, many existing BI tools are not tailored to enable self-service data assembly for business analysts to marry rich data sets with their essential business knowledge.”
That affirmation has been confirmed many times. For example, a recent report by Capgemini Consulting, “Cracking the Data Conundrum: How Successful Companies Make Big Data Operational,” says that lack of Big Data and analytics skills was reported by 25 percent of respondents as a key challenge to successful deployments. “The Big Data talent gap is something that organizations are increasingly coming face-to-face with,” Capgemini said.
Other studies indicate they haven’t been doing such a good job facing the issue, as the self-service BI promises remain unfulfilled.
Enterprises are trying many different approaches to solving the problem. Capgemini noted that some companies are investing more in training, while others try more unconventional techniques, such as partnering with other companies in employee exchange programs that share more skilled workers or teaming up with or outright acquiring startup Big Data companies to bring skills in-house.
Lavastorm, meanwhile, uses the strategy of making the solutions simpler and easier to use. “Demand for advanced analytic capabilities from companies across the globe is growing exponentially, but data scientists or those with specialized backgrounds around predictive analytics are in short supply,” said CEO Drew Rockwell. “Business analysts have a wealth of valuable data and valuable business knowledge, and with the Lavastorm Analytics Engine, are perfectly positioned to move beyond their current expertise in descriptive analytics to focus on the future, predicting what will happen, helping their companies compete and win on analytics.”
The Lavastorm Analytics Engine comes in individual desktop editions or in server editions for use in larger workgroups or enterprise-wide.
New predictive analytics features added to the product as listed today by Lavastorm include:
- Linear Regression: Calculate a line of best fit to estimate the values of a variable of interest.
- Logistic Regression: Calculate probabilities of binary outcomes.
- K-Means Clustering: Form a user-specified number of clusters out of data sets based on user-defined criteria.
- Hierarchical Clustering: Form a user-specified number of clusters out of data sets by using an iterative process of cluster merging.
- Decision Tree: Predict outcomes by identifying patterns from an existing data set.
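For readers who want a feel for what each of those components computes, here is a sketch using scikit-learn equivalents on synthetic data. This is illustrative only and says nothing about Lavastorm’s own drag-and-drop implementation:

```python
# Illustrative scikit-learn equivalents of the five listed techniques,
# run on synthetic data (this is not Lavastorm's implementation).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y_cont = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=100)
y_bin = (y_cont > 0).astype(int)

# Linear regression: line of best fit for a continuous variable of interest
lin = LinearRegression().fit(X, y_cont)

# Logistic regression: probabilities of binary outcomes
log = LogisticRegression().fit(X, y_bin)

# K-means: a user-specified number of clusters
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Hierarchical clustering: clusters formed by iterative merging
hc = AgglomerativeClustering(n_clusters=3).fit(X)

# Decision tree: predict outcomes from patterns in an existing data set
dt = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bin)

print(round(float(lin.coef_[0]), 1))  # close to the true coefficient 3.0
```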
These and other new features are available today, Lavastorm said, with more analytical component enhancements to the library on tap.
The company said its approach to democratizing predictive analytics gives business analysts drag-and-drop capabilities specifically designed to help them master predictive analytics.
“The addition of this capability within the Lavastorm Analytics Engine’s visual, data flow-driven approach enables a fundamentally new method for authoring advanced analyses by providing a single shared canvas upon which users with complementary skill sets can collaborate to rapidly produce robust, trusted analytical applications,” the company said.
About the Author: David Ramel is an editor and writer for 1105 Media.
Originally posted via “Lavastorm Democratizing Big Data Analytics in Face of Skills Shortage”
Organizations will contend with an abundance of trends impacting data governance in the coming year. The data landscape has effectively become decentralized, producing more data, quicker, than it ever has before. Ventures in the Internet of Things and Artificial Intelligence are reinforcing these trends, escalating the need for consistent data governance. Increasing regulatory mandates such as the General Data Protection Regulation (GDPR) compound this reality.
Other than regulations, the most dominant trend affecting data governance in the new year involves customer experience. The demand to reassure consumers that organizations have effective, secure protocols in place to safely govern their data has never been higher in the wake of numerous security breaches.
According to Stibo Systems Chief Marketing Officer Prashant Bhatia, “Our expectations, both as individuals as well as from a B2B standpoint, are only getting higher. In order for companies to keep up, they’ve got to have [governance] policies in place. And, consumers want to know that whatever data they share with a third party is trusted and secure.”
The distributed nature of consumer experience, and the heightened expectations predicated on it, is just one of the many drivers for homogeneous governance throughout a heterogeneous data environment. Governing that data in a centralized fashion may be the best way of satisfying the decentralized necessities of contemporary data processes because, according to Bhatia:
“Now you’re able to look at all of those different types of data and data attributes across domains and be able to centralize that, cleanse it, get it to the point where it’s usable for the rest of the enterprise, and then share that data out across the systems that need it regardless of where they are.”
Metadata Management Best Practices
The three preeminent aspects of a centralized approach to governing data are the deployment of a common data model, common taxonomies, and “how you communicate that data for…integration,” Bhatia added. Whether integrating (or aggregating) data between different sources within or outside the enterprise, metadata management plays a crucial role in doing so effectively. The primary advantage metadata yields in this regard is in contextualizing the underlying data to clarify both their meaning and utility. “Metadata is a critical set of attributes that helps provide that overall context as to why a piece of data matters, and how it may or may not be used,” Bhatia acknowledged. Thus, in instances in which organizations need to map to a global taxonomy, such as for inter-organizational transmissions between supply chain partners or to receive data from global repositories established between companies, involving metadata is of considerable benefit.
According to Bhatia, metadata “has to be accounted for in the overall mapping because ultimately it needs to be used or associated with throughout any other business process that happens within the enterprise. It’s absolutely critical because metadata just gives you that much more information for contextualization.” Such an approach is also useful when attempting to integrate or aggregate various decentralized sources. Mapping between varying taxonomies and data models becomes essential when bringing sources from decentralized environments into a centralized one, as does involving metadata in these efforts. Mapping metadata is so advantageous because “the more data you can have, the more context you can have, the more accurate it is, [and] the better you’re going to be able to use it within a… business process going forward,” Bhatia mentioned.
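As a concrete illustration (the field names and mapping table here are hypothetical, not any vendor’s actual model), mapping a record from a source system’s taxonomy into a central one while carrying its metadata along might look like:

```python
# Hypothetical sketch: map a record from a source system's taxonomy into a
# central data model, attaching metadata (origin system, which fields were
# remapped) so context travels with the data for later lineage checks.
TAXONOMY_MAP = {"prod_nm": "product_name", "cat": "category"}  # source -> central

def to_central(record, source):
    mapped = {TAXONOMY_MAP.get(field, field): value
              for field, value in record.items()}
    mapped["_meta"] = {"source": source, "mapped_fields": sorted(TAXONOMY_MAP)}
    return mapped

row = {"prod_nm": "Widget", "cat": "Tools", "price": 9.99}
print(to_central(row, source="erp-eu"))
```

The `_meta` payload is the point: the central system can always answer where a value came from and how its field names were translated.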
Forrester’s 2018 predictions identify the GDPR as one of the fundamental challenges organizations will contend with in the coming year. The GDPR issue is so prominent because it sits at the juncture of a number of data governance trends. It represents the greater need to satisfy consumer expectations as part of governance, alludes to the nexus between governance and security for privacy concerns, and illustrates the overarching importance of regulations in governance programs. The European Union’s GDPR creates stringent mandates about how consumer information is stored and what rights people have regarding data about them. Its penalties are some of the more convincing drivers for formalizing governance practices.
“Once the regulation is in place, you no longer have a choice,” Bhatia remarked about the GDPR. “Whether you are a European company or you have European interactions, the fact of the matter is you’ve got to put governance in place because the integrity of what you’re sending, what you’re receiving, when you’re doing it, and how you’re doing it… All those things no longer become a ‘do I need to’, but an ‘I have to’.” Furthermore, the spring 2018 implementation of the GDPR highlights the ascending trend towards regulatory compliance, and stiff penalties, associated with numerous vertical industries. Centralized governance measures are a solution for providing greater utility for the data stewardship and data lineage required for compliance.
The focus on regulations and distributed computing environments only serves to swell the overall complexity of data stewardship in 2018. However, dealing with decentralized data sources in a centralized manner aids the role of a data steward in a number of ways. Stewards primarily exist to implement and maintain the policies set by governance councils. Centralizing data management and its governance via the plethora of means available today (including Master Data Management, data lakes, enterprise data fabrics and others) enables the enterprise to “cultivate the data stewardship aspect into something that’s executable,” Bhatia said. “If you don’t have the tools to actually execute and formalize a governance process, then all you have is a process.” Conversely, the stewardship role is so pivotal because it supervises those processes at the point at which they converge with technological action. “If you don’t have the process and the rules of engagement to allow the tools to do what they need to do, all you have is the technology,” Bhatia reflected. “You don’t have a solution.”
One of the foremost ways in which data stewards can positively impact centralized data governance, as opposed to parochial, business-unit or use-case-based governance, is by facilitating data provenance. Doing so may actually be the most valuable part of data stewardship, especially when one considers the impact of data provenance on regulatory compliance. According to Bhatia, provenance factors into “ensuring that what was expected to happen did happen” in accordance with governance mandates. Tracing how data was used, stored, transformed, and analyzed can deliver insight vital to regulatory reporting. Evaluating data lineage is a facet of stewardship that “measures the results and the accuracy [of governance measures] by which we can determine have we remained compliant and have we followed the letter of the law,” commented Bhatia. Without the information gleaned from data provenance capabilities, organizations “have a flawed process in place,” Bhatia observed.
As such, there is a triad between regulations, stewardship, and data provenance. Addressing one of these realms of governance has significant effects on the other two, especially when leveraging centralized means of effecting the governance of distributed resources. “The ability to have a history of where data came from, where it might have been cleansed and how it might emerge, who it was shared with and when it was shared, all these different transactions and engagements are absolutely critical from a governance and compliance standpoint,” Bhatia revealed.
The complexities attending data governance in the next couple of years show few signs of decreasing. Organizations are encountering more data than ever before in a decentralized paradigm characterized by an array of on-premise and cloud architectures that complicate governance hallmarks such as data modeling, data quality, metadata management, and data lineage. Moreover, data is produced much more rapidly than before, with an assortment of machine-generated streaming options. When one considers the expanding list of regulatory demands and soaring consumer expectations for governance accountability, the pressures on this element of data management become even more pronounced. Turning to a holistic, centralized means of mitigating the complexities of today’s data sphere may be the most viable way of effecting data governance.
“As more data gets created, the need, which was already high, for having a centralized platform to share data and push it back out only becomes more important,” Bhatia said.
And, with an assortment of consumers, regulators, and C-level executives evincing a vested interest in this process, organizations won’t have many chances to get it right.
I recently wrote about the value of Enterprise Feedback Management (EFM) vendors. EFM is the process of collecting, managing, analyzing and disseminating feedback from different sources (e.g., customers, employees, partners). EFM vendors help companies facilitate their customer experience management (CEM) and voice of the customer (VoC) efforts, hoping to improve the customer experience and increase customer loyalty. This week, I take a non-scientific approach to understanding the EFM space and wonder how EFM/CEM vendors try to differentiate themselves from each other.
Using a word cloud-generating site, tagxedo.com, I created word clouds for 7 EFM/CEM vendors based on content from their respective Web sites. Word clouds are used to visualize free-form text. I generated each word cloud by simply inputting the vendor’s URL into the tagxedo.com site (done on 7/15/2011, prior to the announcement of the Vovici acquisition by Verint). I used the same tagxedo.com parameters when generating each vendor’s word cloud. For each word cloud, I manually removed company/proper names and trademarked words (e.g., Net Promoter Score) that would easily identify the vendor. The resulting word clouds appear to the right (labeled Provider 1 through 7). These word clouds represent the key words each vendor uses to convey their solutions to the world. The seven vendors I used in this exercise are (in alphabetical order):
Can you match the vendors to their word cloud? Can you even identify the vendor your company uses (given it’s in the list, of course)? Answers to the word cloud matching exercise appear at the end of this post.
Before you read the answers, here is some help. It is clear that there is much similarity among these EFM vendors. They all do similar things: they use technology to capture, analyze and disseminate feedback. Beyond their core solutions, how do they try to differentiate themselves? Giving the word clouds the standard inter-ocular test (aka eye-balling the data), I noticed that, although “Customer” appears as a top word for all vendors, there are top words that are unique to a particular vendor:
- Provider 1: Experience and Solutions
- Provider 2: Contact, Center and Partners
- Provider 3: Online
- Provider 4: Engage and Software
- Provider 5: Measure
- Provider 6: Business
- Provider 7: Market and Research
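Under the hood, a word cloud is just a frequency count with stop words and identifying names removed; the cloud then sizes each word by its count. A minimal sketch of that extraction, with hypothetical stop-word and brand lists:

```python
# What a word-cloud generator computes before drawing: word frequencies
# with stop words and identifying brand names removed (lists here are
# hypothetical, as is the vendor "Acme" and its marketing copy).
from collections import Counter
import re

STOP = {"the", "and", "to", "of", "a", "for", "with", "your", "our", "you", "in"}
BRAND = {"acme"}  # company/proper names removed, as in the exercise above

def top_words(text, n=5):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP | BRAND)
    return counts.most_common(n)

copy = ("Acme helps you capture customer feedback and analyze the "
        "customer experience. Acme customer solutions for feedback.")
print(top_words(copy, 2))  # 'customer' then 'feedback'
```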
Maybe this differentiation, however subtle, can help you with the matching exercise. Let me know how you did. If you have thoughts on how EFM/CEM vendors can better differentiate themselves from the pack, please share them. More importantly, how can these vendors provide more value to their customers? One way is to help their clients integrate their technology solutions into their VoC programs. Those EFM vendors who can do that will be more likely to succeed than those who simply want to sell technology as a solution (remember the CRM problem?).
Answers to the EFM/CEM vendor word clouds: Medallia (Provider 1); Attensity (Provider 2); Vovici (Provider 3); Allegiance (Provider 4); Mindshare (Provider 5); Satmetrix (Provider 6); MarketTools (Provider 7)
LAS VEGAS: Developments in hacking culture and enterprise technology mean that big data-led intelligence defences are the future of the security industry, according to EMC.
David Goulden, CEO of information infrastructure at EMC, made the claim during a press session at EMC World attended by V3.
“The security industry is changing dramatically. If you look to the past, firewall and antivirus technologies came up as the main solution in the second platform,” he said.
“But at this stage in IT the apps were generally used within the enterprise so it was manageable. But this is no longer the case in the third platform.”
Goulden explained that in today’s threat landscape there is no way that companies can keep determined adversaries out of their systems.
He added that to overcome the challenge businesses will have to adopt big data analytics solutions that identify and combat malicious activity on the network.
“The two big challenges in security are big data challenges and based on intelligence. The first is how to securely authenticate who is coming into the network,” he said.
“The second big challenge in security is the identification of anomalies occurring in your network in real time.”
EMC is one of many firms to cite analytics as the future of cyber security.
UK firm Darktrace said during an interview with V3 that businesses’ reliance on perimeter-based defences is a key reason why hackers can spend months at a time in companies’ systems undetected.
A lack of employee awareness regarding security best practice is another common problem at many businesses. Verizon said in March that poor security awareness results in the success of one in four phishing scams.
Taking a swipe at competitors, Goulden claimed that the sweep of data breaches resulting from poor business security practices has led to a rise in the number of companies migrating to RSA analytics solutions.
“We’re a generation ahead of people at this [big data analytics-based security]. Every public breach that has occurred in recent memory has been at companies using our competitors and many have since moved to use RSA technologies,” he said.
He added that the firm has capitalised on this trend, claiming: “75 percent of RSA’s work is aimed at improving security in the future.”
Despite Goulden’s bold claims, RSA reported just $4m year-on-year revenue growth in its Q1 2015 financials.
RSA took in a total $248m in revenue, marking an overall $165m gross profit during the first three months of 2015.
Goulden’s comments follow the launch of EMC’s Data Domain DD9500 solution and updates for its ProtectPoint and CloudBoost services.
Originally posted via “Big data analytics are the future of cyber security”