27/7/17

Using Python to Drive New Insights and Innovation from Big Data


Data science and machine learning have emerged as the keys to unlocking value in enterprise data assets. Unlike traditional business analytics, which focuses on known values and past performance, data science aims to identify hidden patterns in order to drive new innovations. Behind these efforts are the programming languages data science teams use to clean and prepare data, write and test algorithms, build statistical models, and translate results into consumable applications or visualizations. In this regard, Python stands out as the language best suited for all areas of the data science and machine learning workflow.

In a recent white paper, “Management’s Guide – Unlocking the Power of Data Science & Machine Learning with Python,” ActiveState – the Open Source Language Company – provides a summary of Python’s attributes in a number of important areas, as well as considerations for implementing Python to drive new insights and innovation from big data.

When it comes to which language is best for data science, the short answer is that it depends on the work you are trying to do. Python and R are both suited for data science functions, while Java is the standard choice for integrating data science code into large-scale systems like Hadoop. However, Python challenges Java in that respect, and offers additional value as a tool for building web applications. Recently, Go has emerged as an up-and-coming alternative to the three major languages, but it is not yet as well supported as Python.

In practice, data science teams use a combination of languages to play to the strengths of each one, with Python and R used in varying degrees. The guide includes a brief comparison table highlighting each language in the context of data science.

Companies are not only maximizing their use of data but transforming into ‘algorithmic businesses’, with Python as the leading language for machine learning. Whether it’s automatic stock trading, discovery of new drug treatments, optimized resource production or any number of applications involving speech, text or image recognition, machine and deep learning are becoming the primary competitive advantage in every industry.


In the complete white paper, ActiveState covers:

  • Introduction: the Big Data Dilemma
  • Python vs. Other Languages
  • Data Analysis with Python
  • Machine Learning with Python
  • Recommendations

To learn more about introducing Python into your data science technology stack, download the full white paper.



via insideBIGDATA http://ift.tt/2ueDR7y

Scality Launches Zenko, Open Source Software To Assure Data Control In A Multi-Cloud World

Scality, a leader in object and cloud storage, announced the open source launch of Scality Zenko, a multi-cloud data controller. The new solution is free to use and embed into developer applications, opening up a new world of multi-cloud storage for developers.

Zenko provides a unified interface based on a proven implementation of the Amazon S3 API across clouds. This allows any cloud to be addressed with the same API and access layer, while information is stored in each cloud’s native format. For example, any Amazon S3-compliant application can now support Azure Blob Storage without any application modification. Scality’s vision for Zenko is to add data management controls to protect vital business assets, and metadata search to quickly subset large datasets based on simple business descriptors.
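Because Zenko speaks the S3 protocol, any stock S3 client should be able to talk to it. Here is a minimal sketch using Python’s boto3; the endpoint URL and credentials are placeholders for a hypothetical Zenko deployment, not documented defaults:

```python
# Point a standard S3 client at an S3-compatible endpoint (here, a
# hypothetical Zenko deployment) instead of AWS itself.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://zenko.example.com:8000",  # placeholder endpoint
    aws_access_key_id="accessKey1",                # placeholder credentials
    aws_secret_access_key="verySecretKey1",
)

# The same S3 calls work regardless of which cloud backs the bucket.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"Hello, multi-cloud!")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```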

“We believe that everyone should be in control of their data,” said Giorgio Regni, CTO at Scality. “Our vision for Zenko is simple—bring control and freedom to the developer to unleash a new generation of multi-cloud applications. We welcome anyone who wants to participate and contribute to this vision.”

Zenko builds on the success of the company’s Scality S3 Server, the open-source implementation of the Amazon S3 API, which has experienced more than 600,000 DockerHub pulls since it was introduced in June 2016. Scality is releasing this new code to the open source community under an Apache 2.0 license, so that any developer can use and extend Zenko in their development.

“With Zenko, Scality makes it even easier for enterprises of all sizes to quickly and cost-effectively deploy thousands of apps within the Microsoft Azure Cloud and leverage its many advanced services,” said Jurgen Willis, Head of Product for Azure Object Storage at Microsoft Corp. “Data stored with Zenko is stored in Azure Blob Storage native format, so it can easily be processed in the Azure Cloud for maximum scalability.”

Zenko Multi-Cloud Data Controller expands the Scality S3 Server, and includes:

  • S3 API – Providing a single API set and 360° access to any cloud. Developers want to have an abstraction layer allowing them the freedom to use any cloud at any time. Scality Zenko provides a single unifying interface using the Amazon S3 API, supporting multi-cloud backend data storage both on-premises and in public cloud services. Zenko is available now for Microsoft Azure Blob Storage, Amazon S3, Scality RING and Docker and will be available soon for other cloud platforms.
  • Native format – Data written through Zenko is stored in the native format of the target storage and can be read directly, without the need to go through Zenko. Therefore, data written in Azure Blob Storage or in Amazon S3 can leverage the respective advanced services of these public clouds.
  • Backbeat data workflow – A policy-based data management engine used for seamless data replication, data migration services or extended cloud workflow services like cloud analytics and content distribution. This feature will be available in September.
  • Clueso metadata search – An Apache Spark-based metadata search for expanded insight to understand data. Clueso makes it easy to interpret petabyte-scale data and easily manipulate it on any cloud to separate high-value information from data noise. It provides the ability to subset data based on key attributes. This feature will be available in September.

Application developers looking for design efficiency and rapid implementation will appreciate the productivity benefits of using Zenko. Today, applications must be rewritten to support each cloud, which reduces productivity and makes the use of multiple clouds expensive. With Zenko, applications are built once and deployed across any cloud service.

“Cityzen Data provides a data management platform for collecting, storing, and delivering value from all kinds of sensor data to help customers accelerate progress from sensors to services, primarily for health, sport, wellness, and scientific applications,” said Mathias Herberts, co-founder and CTO at Cityzen Data. “Scality provides our backend storage for this and gives us a single interface for developers to code within any cloud on a common API set. With Scality, we can write an application once and deploy anywhere on any cloud.”

 




via insideBIGDATA http://ift.tt/2ukwLOQ

AI Suggests Recipes Based on Food Photos

There are few things social media users love more than flooding their feeds with photos of food. Yet we seldom use these images for much more than a quick scroll on our cellphones. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos like these could help us learn recipes and better understand people’s eating habits. In a new paper, the team trained an AI system to look at images of food, predict the ingredients and suggest similar recipes. In experiments, the system retrieved the correct recipe 65 percent of the time.

“In computer vision, food is mostly neglected because we don’t have the large-scale data sets needed to make predictions,” says Yusuf Aytar, a postdoctoral associate who co-wrote a paper about the system with MIT professor Antonio Torralba. “But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences.”

The paper will be presented later this month at the Computer Vision and Pattern Recognition conference in Honolulu. CSAIL graduate student Nick Hynes was lead author alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain. Co-authors include CSAIL post-doc Javier Marin, as well as scientist Ferda Ofli and research director Ingmar Weber of QCRI.


How it works

The Web has spurred a huge growth of research in the area of classifying food data, but the majority of it has used much smaller datasets, which often leads to major gaps in labeling foods. In 2014, Swiss researchers created the “Food-101” data set and used it to develop an algorithm that could recognize images of food with 50 percent accuracy. Later iterations only improved accuracy to about 80 percent, suggesting that the size of the dataset may be a limiting factor. Even the larger data sets have often been somewhat limited in how well they generalize across populations. A database from the City University of Hong Kong has over 110,000 images and 65,000 recipes, each with ingredient lists and instructions, but only contains Chinese cuisine.

The CSAIL team’s project aims to build on this work and dramatically expand its scope. Researchers combed websites like All Recipes and Food.com to develop “Recipe1M,” a database of over 1 million recipes annotated with information about the ingredients in a wide range of dishes. They then used that data to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes. Given a photo of a food item, the team’s system – which they dubbed Pic2Recipe – could identify ingredients like flour, eggs and butter, and then suggest several recipes that it determined to be similar to images from the database.
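As a rough illustration of the retrieval idea (a toy sketch, not the paper’s actual architecture), one can embed the query image and every recipe into a shared vector space and return the recipes whose embeddings lie closest to the image’s:

```python
# Toy embedding-based recipe retrieval: made-up 4-dim vectors stand in
# for what a trained network would produce for images and recipes.
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def suggest_recipes(image_vec, recipe_vecs, recipe_names, k=2):
    scores = [cosine_sim(image_vec, r) for r in recipe_vecs]
    top = np.argsort(scores)[::-1][:k]          # highest similarity first
    return [(recipe_names[i], round(scores[i], 3)) for i in top]

rng = np.random.default_rng(0)
recipes = ["lasagna", "blueberry muffins", "sushi rolls"]
recipe_vecs = rng.normal(size=(3, 4))
image_vec = recipe_vecs[1] + 0.1 * rng.normal(size=4)  # a photo of muffins

print(suggest_recipes(image_vec, recipe_vecs, recipes))
```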

“You can imagine people using this to track their daily nutrition, or to photograph their meal at a restaurant and know what’s needed to cook it at home later,” says Christoph Trattner, an assistant professor at MODUL University Vienna in the New Media Technology Department who was not involved in the paper. “The team’s approach works at a similar level to human judgement, which is remarkable.”

The system did particularly well with desserts like cookies or muffins, since that was a main theme in the database. However, it had difficulty determining ingredients for more ambiguous foods, like sushi rolls and smoothies. It was also often stumped when there were similar recipes for the same dishes. For example, there are dozens of ways to make lasagna, so the team needed to make sure that the system wouldn’t “penalize” recipes that are similar when trying to separate those that are different. (One way to solve this was by seeing if the ingredients in each are generally similar before comparing the recipes themselves.)
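One simple way to check whether two ingredient lists are “generally similar” is plain set overlap; the paper may use a different measure, so treat this as an illustrative stand-in:

```python
def jaccard(a, b):
    """Overlap between two ingredient sets: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

classic = {"pasta", "tomato", "beef", "ricotta", "mozzarella"}
veggie  = {"pasta", "tomato", "zucchini", "ricotta", "mozzarella"}
print(jaccard(classic, veggie))  # 0.67 -> treat as variants of one dish
```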

In the future, the team hopes to be able to improve the system so that it can understand food in even more detail. This could mean being able to infer how a food is prepared (i.e. stewed versus diced) or distinguish different variations of foods, like mushrooms or onions. The researchers are also interested in potentially developing the system into a “dinner aide” that could figure out what to cook given a dietary preference and a list of items in the fridge.

“This could potentially help people figure out what’s in their food when they don’t have explicit nutritional information,” says Hynes. “For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal.”

The project was funded in part by QCRI, as well as the European Regional Development Fund (ERDF) and the Spanish Ministry of Economy, Industry and Competitiveness.

 




via insideBIGDATA http://ift.tt/2tRGDft

Why Businesses Can No Longer Ignore IoT Security

In this special guest feature, Srikant Menon, Practice Director of Internet of Things (IoT) at Happiest Minds Technologies, discusses how it is imperative for businesses to balance the massive benefits of IoT along with the security risks it poses. While millions of “things” are simple in nature, IoT security is an absolute must and should require an end-to-end approach.

via insideBIGDATA http://ift.tt/2w0ljpN

What Is Artificial Intelligence?

Here is a question I was asked to discuss at a conference last month: what is Artificial Intelligence (AI)? Instead of trying to answer it, which could take days, I decided to focus on how AI has been defined over the years. Nowadays, most people probably equate AI with deep learning. This has not always been the case, as we shall see.

Most people say that AI was first defined as a research field in a 1956 workshop at Dartmouth College. In reality, it was defined six years earlier, by Alan Turing in 1950. Let me cite Wikipedia here:

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2] If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

The test was introduced by Turing in his paper, "Computing Machinery and Intelligence", while working at the University of Manchester (Turing, 1950; p. 460).[3] It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[4] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[5] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[6]

 


So, the first definition of AI was about thinking machines.  Turing decided to test thinking via a chat. 

The definition of AI rapidly evolved to include the ability to perform complex reasoning and planning tasks. Early success in the 50s led prominent researchers to make imprudent predictions about how AI would become a reality in the 60s. The failure of those predictions to materialize led to the funding cuts known as the AI winter of the 70s.

In the early 80s, building on some success in medical diagnosis, AI came back with expert systems. These systems tried to capture the expertise of humans in various domains, and were implemented as rule-based systems. Those were the days when AI focused on performing tasks at the level of the best human experts. Successes like IBM Deep Blue beating the chess world champion, Garry Kasparov, in 1997 were the acme of this line of AI research.

Let's contrast this with today's AI. The focus is on perception: can we have systems that recognize what is in a picture, what is in a video, what is said in a soundtrack? Rapid progress is underway on these tasks thanks to deep learning. Is it still AI? Are we automating human thinking? In reality, we are automating tasks that most humans can do without any thinking effort. Yet we see lots of bragging about AI being a reality when all we have is some ability to mimic human perception. I find it ironic that our definition of intelligence has become one of mere perception rather than thinking.

 

Granted, not all AI work today is about perception. Work on natural language processing (e.g. translation) is a bit closer to reasoning than the mere perception tasks described above. Successes like IBM Watson at Jeopardy! or Google AlphaGo at Go are two examples of traditional AI aiming at replicating tasks performed by human experts. The good news (to me at least) is that progress on perception is so rapid that it will move from a research field to an engineering field in the coming years. We will then see researchers re-position onto other AI-related topics such as reasoning and planning, and we'll be closer to Turing's initial view of AI.



via Planet big data http://ift.tt/2uwyBdM

Business Intelligence as a Competitive Advantage in the Retail Industry

Compared to other sectors, like finance and technology, retail can be considered a late adopter of the advantages offered by business intelligence to daily processes. This is a paradox, as retail operations are among the best suited to the insight provided by digital dashboards.

Questions like “Who is your ideal client?”, “Which products should you promote?”, “Which items should you sell as a bundle?”, “What is the preferred way of paying?” and “How do your clients engage with your brand?” can all be answered through a BI platform that integrates point-of-sale data with demographics and interactions from online interfaces.

Why is Business Intelligence a solution for retail?

The value of BI comes from the evolution of retail companies from organizations based on operations to companies built on innovation, with consolidation, integration, and optimization as the intermediate stages. This is a journey from ad hoc to automated, from naïve to well-defined processes that also gain a predictive dimension. Most retailers are said to be in the integration stage, where the company has enough data to make decisions based on market signals, but the vast majority are not leveraging what they have. Consultancies such as Itransition typically offer three corresponding stages of business intelligence: monitoring intelligence, analytical intelligence and, most important, predictive intelligence.

Business Intelligence is a general name for several applications that can help a company have an integrated overview of vendors, stocks, clients, payments, and marketing. This is a new approach, in contrast to the siloed way of working, specific to the pre-dot com era. A store is like a living organism. Treating each component separately prevents the organization from seeing the bigger picture and the cross-influences that could be used as profit centers.

What are the main BI applications?

Data can show a company how to size their stocks, price products based on demand and create promotions and sales targets to maximize revenue.

Costs, Prices, and Stock Management

Net profit in retail is small, so pricing the product to avoid losses while remaining competitive is one of the greatest challenges. All costs of doing business should be taken into consideration, including unexpected situations, with decisions based on scenario planning and adjusted for seasonal influences.

Stock management is one of the costliest aspects of retail, and BI solutions strive to create the perfect optimization model based on past purchases and future trends. A great app offers stock analysis, highlights the best-selling products and creates replenishment orders for them, while simultaneously advising managers to cancel orders for the worst-performing products.
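A minimal pandas sketch of that replenishment logic; column names and thresholds here are illustrative, not taken from any specific BI product:

```python
# Rank products by sales, flag best sellers that are low on stock for
# replenishment, and flag the worst performers for order cancellation.
import pandas as pd

sales = pd.DataFrame({
    "product": ["cola", "chips", "soap", "candles"],
    "units_sold": [420, 310, 35, 8],
    "on_hand": [50, 200, 300, 500],
})

sales = sales.sort_values("units_sold", ascending=False)
sales["reorder"] = (sales["units_sold"] > 100) & (sales["on_hand"] < 100)
sales["cancel_orders"] = sales["units_sold"] < 20

print(sales)
```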

Customer Analytics

Numbers help you get into the mindset of your clients, see their path from learning about your existence to becoming advocates of your quality. You need to understand the correlations between their demographic and sociographic characteristics and the content of their shopping carts. Pinpoint the link between the ads they see and the products they buy. Drill down to find out their whereabouts, payment methods, time spent in the brick and mortar store or the online retail environment. Put all this data together to create product bundles and promotions.

Vendor Management and Evaluation

Without a vendor evaluation, there is no business growth. You need to see per product results, per vendor analysis and take decisions accordingly. A BI solution should consider things like delivery time, client’s satisfaction with the return policy and even brand perception, if applicable.

Sales, Targets and Performance Analysis

In a small neighborhood store, you might want to know what the best-selling brands are and which of the sales assistants are generating more revenue, while in a multi-national corporation you may need to know which branches are meeting their quotas and which are falling short. In fact, these problems are primarily the same; only the scale is different. In this situation, BI is a great tool due to its scalability.

Additional drill-down levels can be added to an existing solution to get to the root of problems. The BI system can be the base of strategic decisions such as the product mix offered or the bonuses and promotions given to sales agents. The numbers from the dashboard also give a great estimate for setting attainable but motivating sales targets, based on the forecasts.

Trends, Forecasting, and Planning

Even retailers in the 1960s were looking at historical performance. The difference in the smart systems is that you no longer have to wait until the end of the month to do the math and see how the business is doing. A BI system performs in real-time and can dynamically adjust actions to maximize your profit. It’s like constant course correction when sailing to your destination, instead of waiting to see where the waves will take you.

Setting the Right KPIs in Digital Dashboards

Each organization has the possibility of setting their own KPIs, depending on the activity type, strategy and business proposal, but there are a few general guidelines that can be successfully used as defined by Supply Chain Operations Reference (SCOR) Level 1 metrics. These include:

  • reliability, measured by order fulfillment and delivery performance;
  • responsiveness, usually expressed as time;
  • flexibility, as a combination of vendor agility and production agility;
  • costs;
  • asset management efficiency.

When it comes to client-related metrics, you can take inspiration from the sales funnel approach to select the best ones. These track passage from one stage to the next: measuring entry-point leads, computing conversion rates, the price per conversion, the value of a conversion, the average sale value and the time spent in the funnel. The cost of recapturing missed opportunities through re-marketing is also worth tracking.
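The arithmetic behind those funnel metrics is straightforward; here it is with made-up numbers:

```python
# Illustrative funnel arithmetic with made-up numbers.
leads, conversions = 5000, 150
ad_spend, revenue = 3000.0, 12750.0

conversion_rate = conversions / leads          # 0.03 -> 3.0% of leads convert
cost_per_conversion = ad_spend / conversions   # $20.00 spent per conversion
average_sale = revenue / conversions           # $85.00 per converted customer

print(f"{conversion_rate:.1%}  ${cost_per_conversion:.2f}  ${average_sale:.2f}")
```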

Depending on your business model, you are responsible for setting up KPIs to measure the performance of the sales agents.

Where is the retail industry heading?

The retail industry is a mature market, with low net profit rates (1.5%–5%), where every process optimization and cost cut can mean the difference between survival and going out of business. Business Intelligence offers marketers, financial advisors, and strategists a starting point in their quest for a better understanding of clients’ needs and a better anticipation of the steps necessary to remain relevant.

The post Business Intelligence as a Competitive Advantage in the Retail Industry appeared first on Big Data Analytics News.



via Big Data Analytics News http://ift.tt/2vHLbHx

How Big Data Analytics Can Help Your Business

We all like to feel as if we have an intuitive sense of what our businesses need to succeed, but the reality is that successful companies rely on big data analytics to continuously understand, measure and improve. With powerful computing available via the cloud, and more tools and services for data collection and analysis than ever before, you can gain an edge over your competition, streamline your operations, connect with the right customers and even develop or refine the right products, using the unprecedented insights from big data. Refine your business strategy by integrating big data analytics into these five business areas:

Process improvement – work smarter

This is often one of the first areas businesses target and think of when it comes to big data analytics. Collecting data on your business process or production invariably exposes inefficiencies and opportunities. While there are often financial gains, resource use or reuse, scheduling and fulfillment/delivery are all areas that benefit from big data analysis.

Product improvement – create smarter

Beyond simply creating efficiencies in your process or production line, big data analytics used correctly will guide and refine your product or service offering. Use big data to gain insight into the market, trends and customer desires, refine your current offerings based on what best serves the company’s mission, direction and wellbeing, and test ideas early and often. This can be at the level of entirely new product lines, or it can be applied simply but effectively to invest in benefits and features of your product or service offering that bring value to your customer, while rolling back those features that bring the least value or don’t recoup their investment. As an added benefit, using big data early and often to develop and refine your product or service offering can help you identify market opportunities and promote the benefits of most interest to your customers, more coherently.

Customer acquisition – market smarter

Your marketing and advertising dollars are money down the drain if you’re not connecting with the right audience. Refine your customer acquisition process by testing every element, every step of the way. Test and drop campaigns, platforms, ads, text and photos that don’t perform well. With powerful analytics tools, you can track levels of engagement with calls to action.

For online properties such as your website, social channels and content marketing resources, dial down to granular elements like a catchphrase, Call to Action button or image. Stock photo websites, such as Dreamstime, can help resource you with high quality images to test for best audience response. Refine your images and messaging with big data analytics to reach the customers who are seeking your services, without wasting energy and resources on other audiences.

Customer retention – grow smarter

It costs five times more to gain a new customer than to retain an existing one. Slash your customer acquisition costs by using big data analytics to track what keeps your customers and what turns them away. Plan regular touch-points to seek out feedback from your customers and ask them what works, what doesn’t and what they really want. Incorporate personalization to create a feeling of connection, greater satisfaction and desire for your products or services – but be sure to include a human touch and don’t rely solely on data. Use analytics to drive direction, but people to build connection. Big data also helps when it comes to functionally achieving the improvements in service or product that your customers want.

Employee retention – collaborate smarter

The use of big data analytics when it comes to improving employee satisfaction can be of great help – but be cautious with overemphasizing data over relationships when it comes to employees or customers. While big data analytics have been touted as an excellent way to identify the right skills for the job, they run the risk of reducing employees to a list of traits and achievements, and alienating potential talent that needs a human point of contact. As in all areas of your business, use big data strategically to search for candidates, test skills and collect aggregate feedback. Support equality and fair workplace policy, but don’t use it as the sole tool in your kit. Incorporate human touch-points in the process to catch opportunities that a machine might miss.

Whatever the size or nature of your business, big data analytics will help you understand opportunities for improvement and to remain competitive. Employ big data strategically to improve your process and service or product, market it better, find and keep the best customers and get the right people on your team.

The post How Big Data Analytics Can Help Your Business appeared first on Big Data Analytics News.



via Big Data Analytics News http://ift.tt/2vLi0Dg

26/7/17

Top 10 Applications of Artificial Intelligence and Machine Learning You Should Know About!

Silicon Valley reverberates with machine learning today as Artificial Intelligence (AI) continues to reshape, mold and revolutionize the world. Machine learning is a fragment of AI, and a very significant one: a subset that has proved essential to technology’s headway. Artificial intelligence draws on computer science, biology, psychology, linguistics, mathematics, and engineering. It is the exploitation of computer functions that are related to human intelligence, such as reasoning, learning and problem-solving.


Where Artificial Intelligence aims to build machines that are capable of intelligent behavior, Machine Learning is its application: getting computers to work without being explicitly programmed. It is based on algorithms that need not rely on rules-based programming. Machine learning makes it easier to develop smart, sophisticated software that decreases human effort and saves time; years of work can become a matter of minutes and seconds. It is an incredible breakthrough in the field of artificial intelligence.

Although machines have not yet completely taken over from humans, they are slowly percolating into our lives, handling our monotonous, run-of-the-mill jobs as well as providing us with entertainment. Some wonder what the future of AI and machine learning will be, oblivious to the fact that it is not the future but the present, one that will perpetuate and bring about more astounding inventions to modernize and ease our lives. As we venture further into the digital age, our technology makes leaps and strides forward.

I have pooled together the top ten applications of the two, which you should know about. You will be amazed when you find out just how many of them are incorporated into your daily life, and how often you use them!

ROOTING OUT FRAUD


You must be aware of the verification emails and messages you receive from your bank when money has been withdrawn. These exist to prevent theft and fraud and save you from loss. AI is used here to monitor for fraud: it can distinguish between fraudulent and non-fraudulent purchases because it has been trained to do so. The computer is fed large samples of fraudulent and non-fraudulent purchases and asked to figure out which transactions fall into each category.
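A hedged sketch of that training setup using scikit-learn, with synthetic purchases and a toy labeling rule standing in for real fraud labels (real systems use far richer features and labeled histories):

```python
# Train a classifier on labeled purchases (synthetic data; toy features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.exponential(50, n),    # purchase amount
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 2, n),     # card present? (0/1)
])
# Toy stand-in for real fraud labels: large card-not-present purchases.
y = (X[:, 0] > 200) & (X[:, 2] == 0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```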

EMAILS

Emails are winnowed as well. Gmail has successfully been able to filter out 99.9 percent of spam. Its spam filters must continuously learn from signals in message content and message metadata to beat the spammers.

Gmail’s categorization of your emails is a similar effort, whereby it is taught to direct our emails into their respective sections according to how we prioritize them.
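Gmail’s actual pipeline is proprietary, but the textbook version of a learned spam filter is a Naive Bayes classifier over message text; a minimal sketch:

```python
# Toy Naive Bayes spam filter: bag-of-words counts feed a multinomial
# Naive Bayes model (nothing like Gmail's real system, just the idea).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

mails = ["win a free prize now", "meeting moved to 3pm",
         "free money click here", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(mails, labels)
print(model.predict(["claim your free prize"]))  # -> ['spam']
```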

GAMING

AI has been in use since the first video game, though no one appreciated its complexities and effectiveness back then, and it has continued to evolve ever since, bringing in changes one is bewildered at.

Although video games are the most simplistic of AI’s uses, the huge market and demand have resulted in massive investments and efforts.

SOCIAL NETWORKING SITES

Ever wondered about the tagging suggestions that pop up every time you put up a picture on Facebook? That’s another fascinating aspect of AI: a machine learning algorithm that mimics the structure of the human brain powers the facial recognition feature. Your newsfeed is customized according to your likes, and ads that match your interests are displayed.

Pinterest makes use of computer vision, which enables the automatic identification of objects in images so that similar images can be recommended. Machine learning also aids in spam prevention, search and discovery, ad performance and email marketing.

Identifying the contextual meaning of emojis on Instagram, automatic emoji suggestions, and the arrays of filters on both Instagram and Snapchat are all results of applied AI.


VIRTUAL PERSONAL ASSISTANTS

Smartphones are equipped with a voice-to-text feature that converts your audio into text. Google uses artificial neural networks to power voice search. Amazon went a step further with Alexa, a virtual assistant that lends a hand in creating to-do lists, ordering items online, setting reminders and answering questions. Echo smart speakers integrate Alexa into the comfort of your living room, where you can easily ask questions or order food.

SMART CARS

Imagine reading a newspaper or a novel, or enjoying a delicious, succulent meal, all while driving. Yes, it is possible. News of Google’s self-driving car and Tesla’s ‘autopilot’ feature is everywhere. A new algorithm developed by Google allows self-driving cars to learn to drive the way humans do: by experience!


Although Tesla’s autopilot feature isn’t fully autonomous yet, the cars have already hit the road, signaling the arrival of this technology.

ASSESSMENT

The nightmare of every student, the plagiarism checker, is supported by machine learning. Machine learning is capable of detecting plagiarized text that is not even in the database, i.e., content in a foreign language or content that hasn’t been digitized. The algorithmic key is the similarity function, which outputs a numeric estimate of how similar two documents are.
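One common concrete choice for such a similarity function is cosine similarity over TF-IDF vectors; commercial checkers likely use more sophisticated variants, so treat this as an illustration of the idea:

```python
# Score two documents' similarity as the cosine of their TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Machine learning detects plagiarized text automatically.",
    "Plagiarized text is detected automatically by machine learning.",
]
tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf[0], tfidf[1])[0, 0])  # close to 1.0
```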

GRADING

To unburden some of the load, the help of robo-readers is enlisted. Essay grading is a laborious task, and it is for this reason that the Graduate Record Exam (GRE) grades essays using one human reader and one robot reader called e-Rater. If the results do not match, a third human reader is brought in to settle the difference. Pairing human intelligence with artificial intelligence improves the reliability of the results.


SHOPPING ONLINE

While purchasing things online, you search for an item and quickly see more similar, relevant results emerge. An algorithm automatically combines multiple related searches, learning patterns that help the system adapt to and recognize the client’s needs.

The recommendations you receive about items that others have bought, and that you might be interested in, help increase sales. The personalized recommendations on the homepage, at the bottom of the page and in emails are also artificially generated.

ONLINE CUSTOMER SUPPORT

Many sites give their customers the opportunity to talk to customer support while browsing, but rarely do they provide an actual human at the other end to walk you through the site or answer your queries. You are often actually talking to a rudimentary AI. While some can only give you a small amount of information, others are capable of extracting accurate, relevant information from the site.

These chatbots still struggle to decipher the way humans communicate, but rapid advances in natural language processing (NLP) have improved the situation.

WRAPPING IT UP

We have just scratched the surface of AI and machine learning; there is more to dig into and discover. The two are intertwined and continue to touch our lives, easing things up and bringing development. Their promising future and quickly earned popularity make us anticipate what will come next. Machine learning still has room for improvement, which is why it is the buzzword nowadays, and the prime focus of all the large technology companies lies in improving it.

The post is by Victoria Ashley, a professional content writer always seeking opportunities to write on. She is associated with a Trucks & Equipment business as a trainer and content analyst!

The post Top 10 Applications of Artificial Intelligence and Machine Learning You Should Know About! appeared first on Big Data Analytics News.



via Big Data Analytics News http://ift.tt/2tBGcqb

Core Differences between Artificial intelligence and Machine Learning

The ubiquitous influence of Artificial Intelligence and Machine Learning is inescapable. The two terms, although used virtually interchangeably, are entirely different. Before I dig into the complexities of the two, the best way to outline them is this: Artificial Intelligence embodies the whole concept of machines being able to act intelligently and smartly, whereas Machine Learning is an approach to Artificial Intelligence, a subset which emphasizes providing data to computers that they analyze in order to come up with the best possible solution on their own.

THE JOURNEY OF PROGRESS

Research labs have long been simmering with Artificial Intelligence, and it has been in use for a very long time. For decades now, as minds have progressed and our understanding has improved, new locks on Artificial Intelligence have been opened, and there is yet more to achieve.

Machine learning stems from the minds of the early AI crowd. Artificial Intelligence goes back to the time when Aristotle introduced the syllogism. Scientists from mathematics, engineering, psychology, economics and political science dreamed of creating an artificial brain. In the 1930s and ’40s, Alan Turing, a pioneer in computing, formulated techniques that set the ground for Artificial Intelligence today. That is how the magic wand came into being.

The world today is engulfed in Artificial Intelligence. Google, Netflix and Facebook have all revamped themselves with its aid. Since AI is something of greater depth and broader scope, of which Machine Learning is just a part, the confusion between the two should be cleared up. That is what I am about to do here, so that by the time you finish reading, you are able to distinguish between them.

ARTIFICIAL INTELLIGENCE (AI)

It might not be wrong to interpret Artificial Intelligence as the fusion of humans and machines. AI is an umbrella term; to put it simply, it means making computers act smart. It is among the major fields of computer science and covers robotics, machine learning, expert systems, general intelligence and natural language processing.

Artificial Intelligence can be classed as Applied or General. The more common one is Applied Artificial Intelligence, used in designing systems that cleverly trade stocks and shares. Generalized AI, on the other hand, can handle any task, and it is the area where the most exciting advances are being made.

In its infancy, AI helped with mundane daily household chores, through vacuum cleaners, dishwashers and lawn mowers. Security systems have also been developed using AI: national security systems feed data to AI systems, which then present an accurate picture of the problems the nation might face, and crime can be controlled and fought by building criminal profiles.

Artificial intelligence has made its mark in education and learning too. Personalized tutoring that monitors students’ study patterns is achieved with AI. The disabled and elderly have also benefited: robots have been assisting people for a long time now, and voice recognition systems are used in speech therapy. Artificial Intelligence has even shown its potential in transport. The software-driven cars that have recently been launched should reduce the risk of accidents and traffic jams, while driverless metros and trains are pretty old news by now and have proved convenient.

AI propelled the development of Machine Learning and has paved the way for further progress.

MACHINE LEARNING

The best way to describe Machine Learning is to say that it is a way of achieving Artificial Intelligence.

The prime focus of early AI was establishing if-then rules to mimic human knowledge and decision-making. Expert systems were unable to exploit data and learn from it, which posed a barrier to advancement: they remained within the boundaries laid down by their programming and their authors’ cerebral capacity, and failed to go beyond them. Machine Learning succeeded in replacing expert systems and breaking through those barriers. It focuses on constructing algorithms that learn from data, complete tasks and make predictions with high statistical accuracy, without requiring a human perusal of the data, which is a major factor.

With the emergence of the internet came a significant amount of data that was generated, stored and needed to be analyzed. Engineers came to the conclusion that instead of teaching computers how to act, it would be more efficient and convenient to code them to think like humans; plugging them into the internet would then open up the world to them.

NEURAL NETWORK

The neural network was the key fashioned to teach computers to think and understand the world from a human perspective, but with the speed and accuracy machines are known for. Reading a text and deciding whether it is a compliment or a complaint, working out how a genre of music will affect a listener’s mood, or composing themes of its own are all offered by systems built on Machine Learning and neural networks. The idea of communicating with electronic devices and digital information was also planted by science fiction, and it has led to the innovative prospect of Natural Language Processing (NLP), on which work began and is still being done. With the help of NLP applications, machines attempt to understand human language and reply accordingly. Machine Learning helps the machines adapt to the nuances of human language and respond to a particular audience.

The iterative aspect of machine learning makes it possible for models to be exposed to information and then act independently after adapting to its cadences. Past computations help in generating reliable results. The capacity to automatically apply complex mathematical calculations to Big Data, at a pace that quickens with every passing day, is a recent development. Self-driving cars, online recommendations, customer feedback analysis and fraud detection are some of the common applications.

The most significant advent is image recognition. Algorithms are capable of drawing results out of thousands or millions of images, which has greatly impacted the geospatial industry. Image recognition, though, is just one of the many areas Machine Learning has been able to reach.

Analysis of bigger and more complex data, at the speed of light and with precision, even at very large scale, seems fruitful and appealing to industries and business organizations.

CONCLUSION

Artificial Intelligence, and specifically Machine Learning, has a lot in store. It offers the mechanization of mundane, monotonous tasks as well as promising insight into the industrial, business, and healthcare sectors.

Again, although Artificial Intelligence and Machine Learning are entirely different, both are being consistently and lucratively sold. Since Artificial Intelligence has always been there, it is seen as something old, with the new term Machine Learning taking its place. But the surface of Artificial Intelligence still needs to be scoured; there is so much more that has yet to come out and revolutionize human civilization.

Certainly, we are on track to reach that goal and getting nearer with increasing speed, owing to the new light in which Machine Learning has let us see Artificial Intelligence.

The post is by Victoria Ashley, a professional content writer always seeking opportunities to write on. She is associated with a Trucks & Equipment business as a trainer and content analyst!

The post Core Differences between Artificial intelligence and Machine Learning appeared first on Big Data Analytics News.



via Big Data Analytics News http://ift.tt/2upgpo4

25/7/17

Is Machine Learning Really The Future? Everything You Should Know About!

The wave of Machine Learning has hit and transformed every sector, affecting the way we make decisions. The widespread use of Big Data across industries has sparked the use of machines to detect patterns and anticipate the future. With the multiple complicated territories Machine Learning has been able to conquer, such as data mining, natural language processing (NLP), image recognition and expert systems, it is said to be the foundation of future civilization.

Machine Learning is a very promising approach of Artificial Intelligence, one which is radically reshaping the present and the future!

 

MACHINE LEARNING AND ITS WORKING

Machine learning is actually a bottomless pit; it encompasses a lot of things.

The assumption laying the ground for Machine Learning is that analytical solutions can be reached by studying previous data models. It is the process whereby Artificial Intelligence is developed in computers to make them work without being programmed, as efficiently as a human mind but with little or no effort. It makes use of statistical analysis and predictive analysis to carry out the assigned task.

HOW IT WORKS

Machine Learning employs three types of techniques to train models and predict outputs.

  1. SUPERVISED MACHINE LEARNING – Input (X) and output (Y) variables are fed to the supervised algorithm, which learns to map inputs to the desired outputs. It is called supervised because a human supplies the labeled examples the algorithm learns from and checks its predictions on the testing data; a number of iterations help reach an acceptable level of output (see the sketch after this list). Common techniques include:
  • Linear or logistic regression
  • Naive Bayes
  • Support vector machines
  • Discriminant analysis
  • Random forests
  • K-nearest neighbors
  2. UNSUPERVISED MACHINE LEARNING – There is no outcome variable in unsupervised machine learning. The algorithm works on the input variables alone and uncovers structure in the data, forming classifications and associations. Techniques in use include:
  • Gaussian mixtures
  • Neural networks
  • K-means and hierarchical clustering
  • The Apriori algorithm for association rule mining
  3. REINFORCEMENT LEARNING – Here the machine generates programs, called agents, through a process of learning and evolving. Conclusions are drawn from previous results and repercussions, and the best method is selected through trial and error.
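A minimal supervised-learning sketch using one method from the list above, k-nearest neighbors, on scikit-learn’s built-in iris dataset:

```python
# Learn a mapping from labeled inputs X to outputs y, then check the
# predictions on held-out testing data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```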

APPLICATIONS OF MACHINE LEARNING

With all the intensifying hype around Machine Learning, technology companies are under pressure to exploit it further, and soon, to come up with more of its features and ways in which it can be utilized.

Nevertheless, you will be shocked to discover the diversity of machine learning applications, as well as how much you are already making use of them unknowingly!

Machine learning has proved worthy in many industries globally. Some of its staunchest users are:

FRAUD DETECTION

Machine Learning is a pro at detecting anomalies. It can flag malpractice and malfunction in high-volume, high-frequency data transfer and communication. Insider trading in stock markets and fraudulent transactions are quickly and efficaciously caught.

PERCEPTIVE USES

Whether it is a language barrier or a matter of text-to-speech and vice versa, such tasks all make use of ML.

Visual recognition, tone analyzers, chatbots, retrieving and ranking relevant information, and personality insights all use Machine Learning.

MEDICAL

Healthcare organizations make use of Machine Learning: it picks out similar patterns between patients and diseases. Biometric sensors have been saving lives globally, and machine learning is employed to determine the effectiveness of treatments in clinical trials.

FINANCIAL SERVICES

Fraud detection and face recognition first began in the financial sector to catch theft. Since then, through further refinement of its methods, machine learning has been meticulously working through structured and unstructured data.

In business, better forecasting plays an intricate and crucial role. Constant, irregular fluctuation makes demand variability difficult to comprehend, but Machine Learning now provides a solution for demand forecasting.

RETAIL

Through recommendations and suggestions, market analysis, customer sentiment analysis, ad ratings, and the identification of new markets, Machine Learning has helped retailers increase their sales and grow their businesses.

FORECASTS FOR MACHINE LEARNING

In the near future, the world will witness tremendous growth in smart apps, virtual assistants and substantial use of Artificial Intelligence. The mobile market will be escalated by machine learning, and we will soon enter the era of self-driving cars (they have already been launched for testing and trials). Machine Learning is already an incredibly powerful tool that has been solving complicated problems. Although new Machine Learning tools will pop up now and then, the skills required to tune them and jazz them up will forever be in demand.

MACHINE LEARNING A CONTROVERSY?

The disadvantages that would come along with the evolution of Machine Learning are:

  • An overwhelmingly automated lifestyle would make the human race vulnerable to threats and misfortunes!
  • Things would be robbed of genuineness, and the look and feel of originality would be replaced by fakeness.
  • The human resolve and interdependency would be eliminated which is the core of human civilization. Are we prepared to lose ourselves completely? Is it worth being a technology servant?

SCOPE OF MACHINE LEARNING

Machine Learning has a significant role to play in job opportunities; there is no aspect of life where it has not left its mark.

As the amount of data proliferates, the need for engineers and scientists has increased and will continue to grow. A workforce will be required to understand and manage the subtleties and pitfalls of Machine Learning, because what seems a well-tuned, simple machine is capable of leading you astray from your desired outcomes. Companies and industries rely heavily on Machine Learning, so there is great opportunity in the field. The demand for Machine Learning engineers will continue to grow, and you can get in on the action!

CONCLUSION

Machine Learning is a harbinger of potential growth for humans and the economy. So far we have just removed the veneer from the surface; there is much more that Machine Learning has yet to achieve and introduce. There is hardly any application for which Machine Learning cannot be used for detection and prediction.

Despite the contradictions in views, it is assured that the future demand-supply gap in data science and Machine Learning skills can only be bridged by a workforce that can handle Machine Learning’s intricacies, given its benefits. Businesses will plunge into algorithm models that improve and enhance their operations and customer-facing functions, and those algorithms will take business to whole new levels.

We have already seen how technology has replaced humans in the financial markets and many other areas, for the better, taking the load of cumbersome, labor-intensive work off human shoulders. It therefore wouldn’t be wrong to say that Machine Learning has a bright future ahead, one that will help humans enter a new, modified era. Over time, more dramatic evolution will bring in more positive changes.

Do not fear losing your job! Machine Learning will just change the way you work, and yes, it will be more enjoyable and less tiring!

The post is by Victoria Ashley, a professional content writer always seeking opportunities to write on. She is associated with a Trucks & Equipment business as a trainer and content analyst!

The post Is Machine Learning Really The Future? Everything You Should Know About! appeared first on Big Data Analytics News.



via Big Data Analytics News http://ift.tt/2uTiuK7

21/7/17

Big Data is Transforming the Travel Industry

Big data is transforming the way businesses conduct operations. Data is gathered in many ways, through online searches, analysis of consumer buying behavior and more, and companies use this data to improve their profit margins and provide an overall better experience to customers. While big data is used in many industries around the globe, the travel industry stands to gain a tremendous amount from its use. Many larger companies are already using big data creatively, but you may not understand the true value it can provide for your business. With a closer look at how big data is transforming the travel industry, you can better determine how your own business can benefit from its use.

Greater Personalization

The travel industry includes a wide range of businesses, such as rental car companies, hotels, airlines, tour operators, cruise lines and more. Each of these companies must find a way to improve the overall customer experience and to meet the unique needs of each customer, and big data assists with this process. Through the use and analysis of big data, travel industry companies can learn more about the preferences of smaller segments of their target audience or even about individuals in some cases. This gives them the ability to tailor special promotions, deals, experiences and more specifically to them. For example, a Las Vegas resort may determine that its customer base is largely comprised of younger adults, so it may host a popular national hip hop star’s concert in its auditorium to attract more visitors. Improving the customer experience through personalization may also enable travel businesses to generate repeat business through loyalty and to get more word-of-mouth referrals.

Unique Differentiation

Big data analysis also gives travel businesses the opportunity to determine more easily why their customers are choosing them over the competition and vice versa. It is necessary to stand out in a crowded marketplace, and businesses that understand why customers are choosing their business over the competition can tailor marketing and products specifically to that niche. Unique differentiation can be used to improve branding, make marketing more cost-effective and even design new products or promotions that appeal specifically to consumers based on why they are choosing to work with a specific company.

Improvement of Business Operations

Travel industry businesses may also use big data analysis to improve operations in numerous ways. Some data analysis may reveal, for example, that one specific aspect of marketing is ineffective, and the company can alter marketing efforts to generate a greater return on investment. Another company may learn from big data that the customers are choosing the competition more heavily because of special price promotions or a perception of better quality. These are only a few of the ways that big data can be revealing, and proper analysis of it can help travel businesses to improve operations for enhanced success and improved profitability going forward.

Real-Time Travel Assistance

Big data captured through mobile devices can provide travel businesses with insight about customers’ current locations as they travel around the world. Some travel companies are harnessing this real-time data to provide travel assistance and recommendations. For example, if a travel app determines that your smartphone is located next to a popular theme park, restaurant or other attraction, it may send you special offers or deals that you can use to save money on a visit to these places. Some also send helpful travel tips or links to local services that you may find useful.

The Ability to Meet Future Needs

Airlines and cruise lines are just a few of the travel companies that need to know where customers are interested in visiting so that they can customize future travel options available through their companies. For example, one cruise line may see that there is an increased interest in travelers wishing to stop in Costa Rica or Cuba through their use of big data analysis. An airline company may determine that they need a direct flight from Houston to Phoenix several times per week to meet customers’ needs. Perhaps there is an increased interest in people looking for the best places to live in Michigan as a vacation spot, and local businesses can benefit by appealing to these potential long-term vacationers. By analyzing big data, travel companies can better determine how to allocate resources going forward in the most cost-effective way.

Some consumers are understandably timid about, and even alarmed by, how big data is being used to gather information about them. They may view it as an invasion of privacy, for example. However, when travel industry companies use the data they gather through legitimate means to improve the customer experience, all parties can benefit. If you work in the travel industry, analyze the big data you have access to today to determine how you can use the information in positive, productive ways.

Contributed by: Rick Delgado. He's been blessed to have a successful career and has recently taken a step back to pursue his passion for writing. Rick loves to write about new technologies and how they can help us and our planet.

via insideBIGDATA http://ift.tt/2s9tnTo

Peering into Neural Networks

Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today’s best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars. But neural nets are black boxes. Once they’ve been trained, even their designers rarely have any idea what they’re doing — what data elements they’re processing and how.

Two years ago, a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon’s Mechanical Turk crowdsourcing service.

At this year’s Computer Vision and Pattern Recognition conference, CSAIL researchers will present a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grey images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.

The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.

Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing “nodes.” Like neurons, a neural net’s nodes receive information signals from their neighbors and then either “fire” — emitting their own signals — or don’t. And as with neurons, the strength of a node’s firing response can vary.

In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.
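The papers' exact tooling isn't reproduced here, but the basic bookkeeping they describe (record each unit's response to every image, then keep the ten strongest) can be sketched in a few lines of PyTorch. The pretrained network, the choice of layer, and the image_loader below are stand-ins, not the researchers' actual setup:

```python
import heapq
import torch
from torchvision import models

model = models.resnet18(pretrained=True).eval()  # stand-in network, not the one studied
activations = {}

def hook(module, inputs, output):
    # Record each channel's ("node's") mean spatial activation for this image.
    activations["feat"] = output.detach().mean(dim=(2, 3))  # shape: (batch, channels)

model.layer4.register_forward_hook(hook)

top_images = {}  # channel index -> min-heap of (strength, image_id) pairs
with torch.no_grad():
    # image_loader is assumed to yield one preprocessed image tensor per batch.
    for image_id, batch in enumerate(image_loader):
        model(batch)
        for ch, strength in enumerate(activations["feat"][0]):
            heap = top_images.setdefault(ch, [])
            heapq.heappush(heap, (float(strength), image_id))
            if len(heap) > 10:       # keep only the 10 strongest responses
                heapq.heappop(heap)  # the heap root is always the weakest entry
```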

In the earlier paper, the researchers sent the images to workers recruited through Mechanical Turk, who were asked to identify what the images had in common. In the new paper, they use a computer system instead.

“We catalogued 1,100 visual concepts — things like the color green, or a swirly texture, or wood material, or a human face, or a bicycle wheel, or a snowy mountaintop,” says David Bau, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We drew on several data sets that other people had developed, and merged them into a broadly and densely labeled data set of visual concepts. It’s got many, many labels, and for each label we know which pixels in which image correspond to that label.”

The paper’s other authors are Bolei Zhou, co-first author and fellow graduate student; Antonio Torralba, MIT professor of electrical engineering and computer science; Aude Oliva, CSAIL principal research scientist; and Aditya Khosla, who earned his PhD as a member of Torralba’s group and is now the chief technology officer of the medical-computing company PathAI.

The researchers also knew which pixels of which images corresponded to a given network node’s strongest responses. Today’s neural nets are organized into layers. Data are fed into the lowest layer, which processes them and passes them to the next layer, and so on. With visual data, the input images are broken into small chunks, and each chunk is fed to a separate input node.

For every strong response from a high-level node in one of their networks, the researchers could trace back the firing patterns that led to it, and thus identify the specific image pixels it was responding to. Because their system could frequently identify labels that corresponded to the precise pixel clusters that provoked a strong response from a given node, it could characterize the node’s behavior with great specificity.
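One plausible way to score that correspondence, in the spirit of the automated method though not necessarily the authors' exact metric, is intersection-over-union between the pixels where a node fires strongly and the pixels carrying a concept label:

```python
import numpy as np

def concept_iou(activation_map, concept_mask, threshold):
    """IoU between a node's strong-response region and a labeled concept region.

    activation_map: 2-D array of the node's (upsampled) responses over an image.
    concept_mask:   boolean 2-D array, True where the concept's pixels are.
    threshold:      activation level above which the node counts as "firing".
    """
    fired = activation_map > threshold
    intersection = np.logical_and(fired, concept_mask).sum()
    union = np.logical_or(fired, concept_mask).sum()
    return intersection / union if union else 0.0

# Toy example: a node that fires on the top half of an image, scored against
# a concept label covering the top-left quadrant.
act = np.zeros((4, 4)); act[:2, :] = 1.0
mask = np.zeros((4, 4), dtype=bool); mask[:2, :2] = True
print(concept_iou(act, mask, threshold=0.5))  # 0.5 (4 shared pixels / 8 in the union)
```

A node whose best-matching concept scores a high IoU can then be labeled with that concept automatically, with no human reviewer in the loop.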

The researchers organized the visual concepts in their database into a hierarchy. Each level of the hierarchy incorporates concepts from the level below, beginning with colors and working upward through textures, materials, parts, objects, and scenes. Typically, lower layers of a neural network would fire in response to simpler visual properties — such as colors and textures — and higher layers would fire in response to more complex properties.

But the hierarchy also allowed the researchers to quantify the emphasis that networks trained to perform different tasks placed on different visual properties. For instance, a network trained to colorize black-and-white images devoted a large majority of its nodes to recognizing textures. Another network, when trained to track objects across several frames of video, devoted a higher percentage of its nodes to scene recognition than it did when trained to recognize scenes; in that case, many of its nodes were in fact dedicated to object detection.

One of the researchers’ experiments could conceivably shed light on a vexed question in neuroscience. Research involving human subjects with electrodes implanted in their brains to control severe neurological disorders has seemed to suggest that individual neurons in the brain fire in response to specific visual stimuli. This hypothesis, originally called the grandmother-neuron hypothesis, is more familiar to a recent generation of neuroscientists as the Jennifer-Aniston-neuron hypothesis, after the discovery that several neurological patients had neurons that appeared to respond only to depictions of particular Hollywood celebrities.

Many neuroscientists dispute this interpretation. They argue that shifting constellations of neurons, rather than individual neurons, anchor sensory discriminations in the brain. Thus, the so-called Jennifer Aniston neuron is merely one of many neurons that collectively fire in response to images of Jennifer Aniston. And it’s probably part of many other constellations that fire in response to stimuli that haven’t been tested yet.

Because their new analytic technique is fully automated, the MIT researchers were able to test whether something similar takes place in a neural network trained to recognize visual scenes. In addition to identifying individual network nodes that were tuned to particular visual concepts, they also considered randomly selected combinations of nodes. Combinations of nodes, however, picked out far fewer visual concepts than individual nodes did — roughly 80 percent fewer.

“To my eye, this is suggesting that neural networks are actually trying to approximate getting a grandmother neuron,” Bau says. “They’re not trying to just smear the idea of grandmother all over the place. They’re trying to assign it to a neuron. It’s this interesting hint of this structure that most people don’t believe is that simple.”

via insideBIGDATA http://ift.tt/2tEiXPi

19/7/17

Case Study: More Efficient Numerical Simulation in Astrophysics

Novosibirsk State University is one of the major research and educational centers in Russia and one of the largest universities in Siberia. When researchers at the university were looking to develop and optimize a software tool for numerical simulation of magnetohydrodynamics (MHD) problems with hydrogen ionization — part of an astrophysical objects simulation (AstroPhi) project — they needed to optimize the tool’s performance on Intel® Xeon Phi™ processor-based hardware.

via insideBIGDATA http://ift.tt/2uapUHU

Digital Transformation Starts with Customer Experience

At the Adobe Summit, I attended an interview with Nick Drake, Senior Vice President, Direct to Consumer at T-Mobile, and Otto Rosenberger, CMO at the Hostelworld Group. The key takeaway of the entire session was that customer experience is the beginning and the core of digital transformation – it is where it all begins.

T-Mobile and Hostelworld are completely different companies, but what connects them is the fact that they both focused on customer experience when transforming their businesses.

So why is customer experience the key to it all? Because it links organizations to customers at an emotional and psychological level.

The story of Hostelworld

Hostelworld is now a leading hostel booking platform. Three years ago, it was just a booking engine, a purely transactional business. Today, the company accompanies its customers throughout the entire trip. Hostelworld operates globally, with most of its customers based in North America and another 30% to 35% coming from Europe. What fueled this growth? The company went beyond booking and helped its customers at every step, making sure they get the best offers around and can enjoy invitations and group tours during the trip. Almost 50% of its bookers use the app when they are travelling, and 90% of those people say the app made their trip much more fun.

So what did Hostelworld do differently? They tapped into the emotions of their customers, and offered them the experiences they were looking for. Yes, a lot of internal changes were required, but it was worth it. They had to work on their business goals, operating principles, and the team they had. Additionally, they had to divide the budget appropriately between marketing and tech.

T-Mobile’s Journey

Drake shared T-Mobile's journey. When Drake joined T-Mobile, the company was doing well in terms of customer acquisition, but it wasn't living up to its potential. Only 35% of all acquisitions were made on the digital channel, so Drake's task was to raise the bar.

T-Mobile had to radically transform its business, giving the IT team enough breathing space to re-platform its legacy systems. The company decided to go forward with multiple technology platforms, taking a radically different approach to customer experience, but it had to bring about a lot of changes. It understood its audience and figured out ways to interact with them over various channels, while reinventing and customizing its product offerings.

T-Mobile has seen surprising results, doubling its subscriber base since it stepped into the race. The company completely redesigned itself using the Adobe Marketing Cloud. Using personalized content, it reduced clicks by 60% and drove higher engagement levels. From a technical perspective, the team redeveloped the mobile app in order to provide better service, introducing a new feature called asynchronous messaging, which allows users to strike up a conversation with customer service.

Drake advised that it is important to think about what kind of business you are in, and then invest in both the present day and the future. There should also be a balance between the commitments made to shareholders and ensuring that those commitments are met over the next few years.

So what does this boil down to in the end?

Experiences impact the way in which people feel and respond. But businesses must provide rich and immersive experiences that go deeper than redesigning and managing interactions. Experience is more about building, and then nurturing an emotional connection with your audience – so that they completely connect to your brand.

Your business may not have begun to transform digitally, but sooner or later you'll have to take the step. And if you don't, you'll be eaten up by the competition; that is what it comes down to.

Stay updated with what's going on in the digital world with Ronald van Loon, Top Ten Global Influencer for Big Data, the Internet of Things (IoT), and Predictive Analytics, on LinkedIn and Twitter.

Ronald

Ronald helps data-driven companies generate business value with best-of-breed solutions and a hands-on approach. He has been recognized as one of the top 10 global influencers for predictive analytics by DataConomy, and by Klout for data science, big data, business intelligence and data mining. He is a guest author on leading big data sites, a speaker, chairman and panel member at national and international webinars and events, and runs a successful series of webinars on big data and digital transformation. He has been active in the data (process) management domain for more than 18 years, has founded multiple companies, and is now director at a data consultancy firm that is a leader in big data and data process management solutions. His broad interests span big data, data science, predictive analytics, business intelligence, customer experience and data mining. Feel free to connect on Twitter or LinkedIn to stay up to date on success stories.


The post Digital Transformation Starts with Customer Experience appeared first on Ronald van Loons.



via Planet big data http://ift.tt/2rMnjiJ

What are Digital Twins?

Digital transformation has put a host of new concepts and technologies into the hands of consumers and businesses alike, and the digital twin is one of them. It is a virtual image of your machine or...

...


via Planet big data http://ift.tt/2s5q6Zz

Virtual and Augmented Reality: The Future of Big Data Visualization?

For years, the drive to acquire large amounts of digital data has outpaced efforts to organize what is gathered. These massive compilations of data, coined “big data,” may be the key to business strategy, improvement, and pattern analysis. However, big data is all too often gathered inconsistently and can be difficult to display to the user in an effective manner.

Although there have been multiple improvements to both the collection and visualization of big data in the last few years alone, including the implementation of 3-D spatiotemporal interactive data visualization software, some believe this data visualization can even go one step further and delve into the vast world of virtual and augmented reality. However, others believe that this format could prove to be even more difficult than the basic big data visualizations of today and would ultimately be a step in the wrong direction when it comes to improving the way we currently look at big data.

Therefore, in order to fully understand how virtual and augmented reality could be utilized in big data visualization, we must first stop and ask ourselves: How could the virtual reality visualization of big data be accomplished? Who is currently at the forefront of this interface software? And what are the pros and cons of using virtual and augmented reality to analyze big data? Once we have answered these questions, we will have a deeper understanding of how this software would work, how it could improve modern big data analysis, and what to watch for in the next few years as its popularity slowly rises.

How Can Virtual and Augmented Reality Be Used To Visualize Big Data?

In 2016 alone, over 6.3 million VR headsets were shipped worldwide. Although the original primary use for these headsets was in the gaming industry, this did not last for long, as people began to recognize their immense potential in a professional setting. In fact, one of the first business uses for the virtual reality headset came in the form of virtual modeling. Through virtual model visualization, designers were able to actually interact with their models and find flaws in a virtual setting, without wasting the materials that traditional model building consumes.

Since then, multiple companies have sought to engage more of our senses, utilizing olfaction, sonification, and vibration in order to create a more realistic virtual world. With the incorporation of motion trackers such as the Vive Tracker alongside haptic glove technology, companies are now able not only to visualize models but to touch them with their hands, which opens up a whole new level of opportunity in the virtual visualization market. For big data visualization, this meant that a future in the virtual and augmented reality world was probable.

In order to recognize the various benefits of implementing virtual and augmented reality interfaces for data visualization, we must first take a look at the data visualization tools currently on the market and the limited options they provide. During the October 2016 Data Science Meetup, a representative from the VR/AR company Meta Augmented Reality talked about just that. According to Meta, “Existing tools leave a little to be desired.”

Meta talked directly about some of the setbacks of current big data visualization tools and how augmented reality can tackle these issues one by one. For starters, the current tools on the market are often limited in size by the display medium. This means you are forced either to drill down into data sets or to crop them, which can be frustrating and relies on you retaining all of that information in your head as you view it. If you don't, the process overloads your working memory and becomes far more tedious and tiresome.

However, perhaps the most ineffective aspect of current data visualization software concerns the 3-D objects it deals with. Although such software can create the illusion of three-dimensional objects, it does so solely through monocular depth cues, which trick your brain into believing certain items are closer than others, thus creating a 3-D environment. The human mind may be tricked temporarily, but these 3-D depth cues are not very accurate, and they therefore often require workarounds.

After reviewing this data, Meta recognized the flaws in traditional data visualization interfaces and set out to build an augmented reality headset that utilized principles in neuroscience and vision science to create the most natural interface possible with their computers. Although this is merely one example, multiple companies have turned towards this initiative in order to create far more effective and interactive data visualization systems, and the progress even now is immense.

Who Is Leading This Big Data Visualization Revolution?

Recently, EU researchers at CEEDS (Collective Experience of Empathic Data Systems) successfully transposed big data into an immersive, interactive virtual environment at Pompeu Fabra University in Barcelona. Known as the eXperience Induction Machine, this device allows users to step into large data sets in order to visualize data in a more “empathic” way, making large data sets far easier for individuals to understand, given the way the brain works with numbers and visual information.

However, perhaps one of the most impressive incorporations of big data analytics into a virtual reality environment comes from the company Virtualitics. Founded in 2016, the company raised $4.4 million in initial funding, and for good reason. Unlike many data visualization tools, both from the past and in the VR sector, the Virtualitics visualization interface also incorporates AI, using smart mapping, smart routines, machine learning, and natural language processing to identify important patterns and display them in the virtual environment, which users can then customize. If that weren't already impressive enough, the user can examine up to 10 dimensions of data at a time, making it one of the most effective and intelligent designs created thus far. Even so, this new technology still has a way to go, with many opportunities for improvement and advancement through the incorporation of other forms of modern technology.

For instance, when virtual reality model creation began to take hold, students and researchers at the University of Illinois developed perhaps one of the most fully immersive model visualization systems currently available, known as CAVE, or Cave Automatic Virtual Environment. The system puts users in a completely immersive virtual world that is scalable and easy to manipulate, which means the objects within it can be visualized far more effectively than in a regular 3-D virtual model.

Furthermore, with the incorporation of tracking devices such as the HTC Vive Tracker, which became available to developers in March of this year, as well as haptic gloves such as Neurodigital's GloveOne, which is fully compatible with the Vive Tracker, the ability not only to click on particular data points and enlarge them but also to touch and feel these objects in your hands is truthfully only a few steps away from being the next giant leap for big data visualization.

With this technology in place, it could very soon be possible to also create interoperable data repositories for Collaborative Work Environments (CWEs), which would allow whole departments or teams to collaborate in virtual and augmented environments on projects as a team.

A good example of this can be seen in software developer Drew Gottlieb's project, dubbed “mixed reality,” in which he used virtual reality headsets to transfer virtual 3-D objects into an augmented reality setting. Through this, Gottlieb was able to interact with simple blocks he created in an office space alongside a co-worker. Watching this, we can see how this kind of data visualization and collaboration could be a great way for teams not only to gain a better understanding of the big data they have collected but also to determine ways to use that data to their benefit in a fully interactive, team-oriented manner.

Lastly, another large advancement in virtual data visualization comes from VMware, a subsidiary of Dell Technologies that specializes in cloud computing and virtualization. As one of the most established companies in this market, VMware may not be rushing to advance its technology, but that technology is stable and reliable, which is something multiple other offerings simply are not at this point in time. However, as with any new technology, there are always things that can be improved, and it is important that we recognize these improvements and pursue them as often as possible so as to grow and expand this otherwise highly promising market.

What Are the Negative Aspects of Virtual and Augmented Reality Data Visualization?

There are two main issues regarding virtual reality big data visualization that have yet to be resolved. Both are common concerns in the tech industry, and both are certainly capable of being fixed. The first is that virtual reality headsets still do not have the kind of security settings that various other forms of technology do, due in part to the initial demand for these headsets, which led to certain security standards being ignored in order to get production rolling.

On the subject of supply and demand leading to security issues, Ben Smith, CEO of Laduma, stated, “As new developments are rushed to market in order to gain a lead on competitors, there is a risk that mistakes are being made.” The truth is that weakly secured devices connected to the cloud, especially those that also hold personal information, can be highly dangerous. Once such a device becomes an integral part of a company's analytics process, it is connected not only to the analytics of the entire company but also to various other systems containing personal information. These weak devices can quickly become an entry point for malware and cyber attacks directed at your company and its sensitive data.

A prominent example of this is the Mirai malware attack of 2016, the largest of its kind. This DDoS attack on the DNS service provider Dyn left multiple major companies and their products inaccessible for nearly an entire day. Although the attack may sound as though it required serious hacking sophistication, it actually began with a botnet of poorly secured internet-connected devices, such as IP cameras, DVRs and home routers, which Mirai compromised largely by trying factory-default usernames and passwords. Once enough devices were infected, the attacker directed the botnet's combined traffic at Dyn, overwhelming its servers.

However, as new versions of the original VR headsets hit the market, along with patches that fix security settings, it is highly unlikely that these issues will remain for much longer. With secure headsets, and equally secure devices connected to the cloud, companies can rest assured that the first of the two main issues has been resolved, and their analytics will be all the more secure because of it.

As for the second main issue with virtual reality big data visualization, there is some concern that these devices could lead to data loss if the interface were overloaded with too many data sets at once. Data loss itself may not seem like a big issue given the existence of data recovery services, but that is exactly where the problem lies. Although there are a few quality virtual data recovery companies, such as Kroll Ontrack, that work with systems like VMware, the list of such companies is fairly short. Furthermore, it is not recommended to have data recovered from a virtual interface by a company that is not qualified to do so. However, with the popularity of this form of big data visualization rising daily, it is likely that more data recovery services will begin to support VR data interfaces and that the interfaces themselves will become more capable and less prone to data loss.

With this said, this is merely a minor issue to consider as the wonderful possibilities of virtual reality data visualization unfurl and nothing to be seriously concerned about in the long run. In fact, with so much opportunity on the horizon and a chance to finally visualize big data in a fully interactive and engaging manner, there is very little to be concerned about at all as you, your team, and your company will soon be able to take your analytics process to a whole new level and reap the myriad lucrative and progressive benefits because of it.

The post Virtual and Augmented Reality: The Future of Big Data Visualization? appeared first on Big Data Analytics News.



via Big Data Analytics News http://ift.tt/2uHEjMQ

18/7/17

AI – The Present in the Making

I attended the Huawei European Innovation Day recently, and was enthralled by how new technology is giving rise to industrial revolutions. These revolutions are what will eventually unlock development potential around the world. It is important to leverage emerging technologies, since they are the resources that will lead us to innovation and progress. Huawei is innovative in its partnerships and collaborations to define the future, and the event was a huge success.

For many people, the concept of Artificial Intelligence (AI) is a thing of the future, a technology that has yet to be introduced. But Professor Jon Oberlander disagrees. He was quick to point out that AI is not in the future; it is now, in the making. He began by mentioning Alexa, Amazon's star product. It's an artificially intelligent personal assistant, made popular by the Amazon Echo devices. With a plethora of functions, Alexa quickly gained popularity and fame. It is used for home automation, music streaming, sports updates, messaging and email, and even ordering food.

Even with all these skills, Alexa is still being updated as more features and functions are added to an already long list. This innovation has certainly changed the perception of AI as a technology of the future. AI is the past, the present, and the future.

Valkyrie is another example of how AI exists in the present. There are only a handful of these robots in the world, and one of them is owned by NASA. Valkyrie is a platform for establishing human-robot interaction, built in 2013 by the Johnson Space Center (JSC) Engineering Directorate. This humanoid robot is designed to be able to work in damaged and degraded environments.

The previous two were a bit too obvious. Let’s take it a notch higher.

The next thing on Professor Jon Oberlander's list was labeling images on search engines. For example, if we search for an image of a dog, the search engine will show all the images that contain a dog, even when the dog is not the focal point. Connected-component labeling, a technique used in computer vision, is another great example of how AI is developing in present times.
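Connected-component labeling itself is a standard operation; as a minimal illustration, SciPy's ndimage.label groups adjacent foreground pixels of a binary image into numbered regions (the tiny array below is made up):

```python
import numpy as np
from scipy import ndimage

# A toy binary image containing two separate foreground blobs.
image = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
])

labeled, num_components = ndimage.label(image)  # 4-connectivity by default
print(num_components)  # 2
print(labeled)
# [[1 1 0 0]
#  [1 0 0 2]
#  [0 0 2 2]]
```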

Over the years, machine translation has also gained popularity, as numerous people around the world rely on these translators. Over the past year there has been a massive leap forward in the quality of machine translation, as algorithms are revised and new technology is incorporated to enhance the service.

Start with a guess, and end up close to the truth: that's the basic idea behind Bayes' Rule, a law of conditional probability.
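In symbols, Bayes' Rule reads P(H|D) = P(D|H) · P(H) / P(D): a prior belief in a hypothesis H is updated into a posterior after seeing data D. A tiny worked example with made-up numbers: suppose 1% of a site's visitors are bots, and a detector flags 90% of bots but also 5% of humans.

```python
# Bayes' Rule: P(H|D) = P(D|H) * P(H) / P(D), with illustrative numbers only.
p_bot = 0.01         # prior: 1% of visitors are bots
p_flag_bot = 0.90    # detector flags 90% of bots
p_flag_human = 0.05  # ...and, wrongly, 5% of humans

# Total probability of seeing a flag, summed over both hypotheses.
p_flag = p_flag_bot * p_bot + p_flag_human * (1 - p_bot)

# Posterior: probability that a flagged visitor really is a bot.
p_bot_given_flag = p_flag_bot * p_bot / p_flag
print(round(p_bot_given_flag, 3))  # 0.154: the 1% guess has moved toward the truth
```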

But how did we get here? All these great inventions and innovations have played a major role in making AI a possibility in the present. These four steps led us to this technological triumph:

  • Starting
  • Coding
  • Learning
  • Networking

Now that we are here, where will this path take us? It has been a great journey so far, and it's bound to get more exciting in the future. The only way we can eventually fulfill our goals is through:

  • Application
  • Specialization
  • Hybridization
  • Explanation

With extensive learning systems, it has become imperative to devise fast-changing technologies, which will in turn facilitate the spread of AI across the world. With technologies such as deep fine-grained classifiers and the Internet of Things, AI is steadily gaining ground. And much of this traces back to Thomas Bayes, who laid the mathematical foundations of this intelligent technology.

If you would like to read more from Ronald van Loon on the possibilities of AI, please click “Follow” and connect with him on LinkedIn and Twitter.

Ronald

Ronald helps data-driven companies generate business value with best-of-breed solutions and a hands-on approach. He has been recognized as one of the top 10 global influencers for predictive analytics by DataConomy, and by Klout for data science, big data, business intelligence and data mining. He is a guest author on leading big data sites, a speaker, chairman and panel member at national and international webinars and events, and runs a successful series of webinars on big data and digital transformation. He has been active in the data (process) management domain for more than 18 years, has founded multiple companies, and is now director at a data consultancy firm that is a leader in big data and data process management solutions. His broad interests span big data, data science, predictive analytics, business intelligence, customer experience and data mining. Feel free to connect on Twitter or LinkedIn to stay up to date on success stories.


The post AI – The Present in the Making appeared first on Ronald van Loons.



via Planet big data http://ift.tt/2thT8oT