Resources


Blog Post 4 min read

Ensuring Data Quality is a Big Challenge for Business

In 1999, NASA lost its $125 million Mars Climate Orbiter when the spacecraft burned up in the Martian atmosphere. The engineering and software were meticulously built to NASA’s high standards and operated as intended, but the data that put the spacecraft on its doomed trajectory was flawed. The navigation team at the Jet Propulsion Laboratory used the metric system for its calculations, while Lockheed Martin Astronautics in Denver, which designed and built the spacecraft, provided crucial thruster data in the English system of inches, feet and pounds. JPL’s engineers assumed the thruster data (measured in English units of pound-force seconds) was in metric newton-seconds, and sent the spacecraft on a doomed and costly flight.

While most data quality mistakes don't end in the fiery destruction of a multi-million dollar spacecraft, misunderstood data is costly for today’s businesses. Data is the lifeblood of every company, helping companies work better, work smarter, and reach their target audiences. According to Gartner, modern business intelligence (BI) and analytics continues to expand more rapidly than the overall market, offsetting declines in traditional BI spending. Data quality, however, is still the biggest challenge. While many companies are investing in BI visualization tools, they are not necessarily applying the same effort to the data itself. These companies then face frustration and disappointment when the ‘right’ data is not produced.

Data Isn't Treated as a Business Asset

Even though data is at the heart of business decisions, companies don't always handle their data as an enterprise asset. Data may be handled tactically, with databases and applications created as requested by a business unit. Enterprise-wide data dictionaries are rarely applied to enforce consistency in the meaning of fields, and departmental IT teams address issues in isolation from wider business goals.
The overall approach is ad hoc, leading to a fractured data system and leaving the business to question the reliability of its data.

Data Fuels Insights... Unless It's Wrong

Companies are often more focused on simply collecting data, losing sight of how to ensure its quality. Unreliable data undermines a business’ ability to perform the meaningful analytics that support smart decision-making and efficient workflows. Quality data is required across the organization: for management, operations, compliance, and interaction with external partners, vendors, and customers.

Maintaining Good Data Quality

What makes good quality data? Data quality is measured by many factors, including accuracy, completeness, consistency, and timeliness. Even a dataset that seems accurate and consistent can lead to poor results when there are missing fields or outdated records. Maintaining high quality data is a real business challenge. It is further complicated by the dynamic nature of different data generation sources and devices, and by the enormous scale of the data itself. Companies need to confront their data quality challenges before trust in their data erodes. When trust in data is lost, that doubt spreads, raising questions at all levels of the organization.

Data Quality Case Study: An E-Commerce Company Misses a Key Event

Here's a recent example of how data issues led an e-commerce company to make some costly business decisions. The company collects event data through its mobile app, feeding a central data repository that drives its analytics and customer strategy. Every page and every click is collected for analysis, including tracking when products are added to or removed from a cart, how a user searches, and other user interactions on the site. There are potentially hundreds of events from each page. When a new version of the app was deployed, it had a bug that failed to collect some event data for certain iOS versions.
Because of the large volume of data collected, the problem wasn't noticed and the missing data went unidentified for several weeks. As a result, the business perceived a drop in purchases (when in reality the opposite occurred) and in reaction increased the marketing budget for a specific product. In reality there was no need for that increased marketing investment, and the budget would have been better spent elsewhere.

The Neverending Story: Ensuring Quality Data

Artificial intelligence can be used to rapidly transform vast volumes of big data into trusted business information. Problems can be addressed immediately, saving weeks of inaccurately reported data, and renewed trust in the quality of your data directly impacts business priorities. Anodot's AI-powered analytics solution automatically learns the normal behavior of each data stream and flags any abnormal behavior. Using Anodot, changes that can impact data quality are immediately alerted on so that they can be addressed, preventing wasted time and energy and ensuring that decisions are made based on complete and accurate data.
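The kind of detection this case study calls for can be illustrated with a toy version of the idea: learn a baseline from recent history and flag sharp deviations. This is a hypothetical sketch using a simple rolling z-score, not Anodot's actual algorithm; the daily iOS event counts and thresholds are invented for illustration:

```python
from statistics import mean, stdev

def detect_drops(counts, window=7, z_thresh=3.0):
    """Flag days whose event count deviates sharply from the trailing window."""
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat history, no variance to compare against
        z = (counts[i] - mu) / sigma
        if abs(z) >= z_thresh:
            anomalies.append((i, counts[i], round(z, 1)))
    return anomalies

# Stable volume, then the buggy release silently drops iOS events on day 10.
daily_ios_events = [1000, 1020, 980, 1010, 990, 1005,
                    1015, 995, 1008, 1002, 610, 605]
print(detect_drops(daily_ios_events))  # day 10 is flagged with a large negative z
```

A human scanning raw totals across hundreds of event types would miss this for weeks, which is exactly what happened here; a per-segment baseline catches it on day one.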
Documents 3 min read

Case Study: Anodot's Useful eCommerce Insights for Wix

Wix needed a real-time alert system that would surface issues in key metrics without manual threshold settings. Anodot proved to be the system Wix required, providing the necessary insights to the company's analysts.
Videos & Podcasts 19 min read

Identify eCommerce Losses and Opportunities with Machine Learning

Anodot Sr. Director of Customer Success Nir Kalish presents how eCommerce organizations can leverage the powerful anomaly detection capabilities of Anodot’s AI analytics to proactively address business incidents in real time, protect revenue, and keep customers happy.
Blog Post 3 min read

AWS re:INVENT – Anodot Joins the AWS Machine Learning Competency Program

AWS is one of the biggest names in technology right now, so anything related to AWS is a big deal. We just got back from the sold-out 2017 AWS re:INVENT conference, held this year in Las Vegas, NV. With a record-breaking attendance of more than 40,000 people, it was filled with representatives from tech luminaries such as Accenture, Intel, Deloitte, Salesforce, VMware, and more.

Re:INVENT has always been an event for learning. It provides the opportunity for attendees to familiarize themselves with flagship Amazon products such as EC2, S3, Redshift, and more. The hands-on training, networking opportunities, and breakout discussions have always been an invaluable opportunity for professional development – plus there’s always the excitement of being privy to new AWS announcements. As such, we were thrilled to be among the first to know about what, for us, is among the most exciting new developments with AWS – the AWS Machine Learning Competency.

The AWS Machine Learning Competency: A Partner Ecosystem Focused on AI and ML

The market for artificial intelligence (AI) technologies is flourishing. While a broad set of important technologies is emerging, some of these technologies are still in their early stages. The AWS Machine Learning Competency showcases the industry-leading AWS Partners that provide proven technology for a variety of use cases, helping companies deliver ML at scale.

Rigorous Testing and a Tough Admissions Process

AWS set a high bar for admission to the AWS Machine Learning Competency program. To join, enterprises undergo a strict validation of their capabilities, demonstrating technical proficiency and proven customer success. APN Partners must also complete a technical audit of their ML solution.
As the AWS Machine Learning Competency datasheet notes, the competency is meant for partners whose bread and butter is providing machine learning services to their customers, rather than partners who simply use machine learning in the background of their solutions. Out of 150 companies invited to apply, only 17 were selected across three categories: Data Services, Platform Solutions, and SaaS/API Solutions. And yes, in case you’re wondering why we’ve spent so long talking about this, Anodot was one of just six companies selected in the SaaS/API Solutions category.

“Not every machine learning problem requires starting from scratch and building a custom solution, and not all of our customers have access to a dedicated data science team with the time and expertise to build a production workflow for large scale predictions,” said Joseph Spisak, Global Lead for Artificial Intelligence and Machine Learning Partnerships, Amazon Web Services, Inc. “We are delighted to welcome Anodot to the Artificial Intelligence and Machine Learning Competency Program to provide off-the-shelf machine learning solutions that can help speed time to market and bring intelligence to any application.”

An Unforgettable Event of Amazing Proportions

We got to hear Swami Sivasubramanian, VP of Amazon AI at Amazon Web Services, share his insights: "Our goal is to put machine learning capabilities in the hands of all developers and data scientists. The other thing we are excited about is the API services, where people who don't want to know anything about machine learning but want to benefit from the analytics capabilities. I am really excited about all the applications that we can use to improve our everyday life using machine learning." Finally, we rounded out our visit with a round of virtual golf and hung out with our main contact at AWS, a great guy with a custom-tailored Pac-Man suit.
Documents 1 min read

Case Study: Lyft Optimizes its Business with Unique Anomaly Detection

Explore how this ride-share leader uses Anodot to identify business risks in real time.
Blog Post 4 min read

Macy’s Black Friday Failure Is an Industry-Wide Problem… and Could Have Been Avoided

Imagine lining up before sunrise on a cold November morning the day after Thanksgiving, waiting in line to buy up to hundreds of dollars in discounted goods – and then being informed that the retailer can’t accept your credit card. Would you shop there again? This nightmare scenario is what happened to Macy’s shoppers on November 24th, 2017, aka Black Friday. Around noon, overcapacity issues shut down the retail giant’s payment processing systems, turning them into a cash-only enterprise for up to six hours on one of the busiest shopping days of the year. The blow to company revenue may be calculable, but the damage to the company’s reputation is not. How can companies avoid these expensive and embarrassing disasters?

Glitches Are More Common Than You Think

Retailers are plagued by glitches, big and small. Even Amazon, probably the most technologically sophisticated ecommerce platform on Earth, is not immune to the occasional error (free Echo Dots, anyone?). So how are smaller and less tech-savvy retailers expected to cope? The glitches they encounter could include:

- High-volume traffic overloading payment processing servers
- Orders in online shopping carts failing to complete
- Incorrect pricing for goods sold online or in stores
- Mistargeted online advertising
- Missed opportunities to upsell or cross-sell customers based on faulty analytics

Did you know that customers are up to 3x as likely to post a negative review of your company after a bad experience? And that 80% of potential prospects will desert your company if they read negative reviews of your products and services? Though it’s hard to put a price on the loss of reputation, the lost sales do stack up, averaging $250,000 per incident during Black Friday and Cyber Monday, and $40,000 per incident during the rest of the year.
Smart Error Handling Demands a Proactive Approach

Until recently, retailers had few options when it came to preventing failures in their ecommerce platforms or their brick-and-mortar stores. The traditional approach was to wait for something to break, and then fix it as quickly as possible. In Macy’s case, “quickly” was about six hours. That’s not an acceptable speed. Another option is to try to track everything that is happening with dashboards and alerts, which quickly grows out of hand – how do you know where to look among your hundreds of beautiful visualizations? Which alert is meaningful when you get hundreds every day?

One of the difficulties in these scenarios is that similar-looking errors might have diverse causes. Your cash registers might not be able to process credit cards, but that could be due to a failure in any number of separate applications. We refer to these errors as “micro-glitches” – multiple, nearly imperceptible failures in multiple locations which accumulate to cause spectacular outages.

Going back to Macy’s: according to news reports, the company suffered an overcapacity outage of its credit card systems. Even this kind of outage comes with its own subsidiary failures and forewarnings, however. Systems as theoretically robust as the payment processing system for a major retailer don’t fail all at once. There are cumulative warning signs, such as an increasing trickle of transaction failures, or additional latency during card transactions. The ability to detect and interpret these warning signs could have meant averting a system failure at 10:00 AM, as opposed to trying to recover from a crash at noon. Maybe I’m stating the obvious here, but it’s much better and easier to prevent a failure that you identify early than to try to recover after the fact.

How Anodot Can Help

Where does Anodot fit in?
Our AI-powered analytics gives companies the ability to collect and interpret real-time time-series metrics from across their payment processing systems and every other internal and external system, letting them detect and mitigate failures before they begin to drain their revenue and reputations. This can protect companies from the types of disastrous events that Macy’s experienced on Black Friday. To learn more about Anodot, and how our technology can give early warning capabilities to your ecommerce platform, sign up for a free demo today.
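The "increasing trickle of transaction failures" described above is exactly the kind of signal a learned baseline can surface hours before a hard outage. As a hypothetical illustration (not Anodot's method), here is a minimal sketch that tracks a payment failure rate with a fast and a slow exponentially weighted moving average and alerts when the fast tracker pulls away from the slow one; all rates and parameters are invented:

```python
def ewma_alerts(failure_rates, alpha=0.3, ratio=2.0, floor=0.005):
    """Alert when the smoothed failure rate exceeds `ratio` times the long-run level."""
    baseline = failure_rates[0]   # slow tracker: the long-run "normal" rate
    smoothed = failure_rates[0]   # fast tracker: follows the current trend
    alerts = []
    for t, r in enumerate(failure_rates[1:], start=1):
        smoothed = alpha * r + (1 - alpha) * smoothed
        baseline = 0.02 * r + 0.98 * baseline
        if smoothed > max(ratio * baseline, floor):
            alerts.append((t, round(smoothed, 4)))
    return alerts

# A 0.2% failure rate for 20 intervals, then a creep upward before the outage.
rates = [0.002] * 20 + [0.004, 0.006, 0.009, 0.014, 0.02, 0.03]
print(ewma_alerts(rates))  # alerts fire while the rate is still climbing
```

The `floor` parameter suppresses alerts while the absolute rate is still negligible; the point of the two-tracker design is that the alert threshold adapts to each system's own normal, rather than being a hand-set static limit.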
Documents 2 min read

Case Study: Mobile Gaming Giant Faced Costly Delays in Addressing Cross-Promotion Glitches

Find out how Anodot’s business incident detection system automatically alerts the mobile gaming company to any changes in their business data streams.
Documents 1 min read

WHITE PAPER: Extending Competitive Advantage in Telco

Explore this white paper prepared by EMA analysts on five use cases for applying AI analytics in telco services.
Blog Post 6 min read

AI-powered Analytics Illuminates IoT Data for Samsung ARTIK Cloud

Few things have propelled the Internet of Things’ dizzying growth in recent years as much as machine learning and the innovators who are pushing it. Independent, intelligent machines that can comb through data to make their own decisions are, to some, the only reason such a phenomenon as the IoT can exist in the first place. When it comes to IoT, the connected devices are expected to work with little human intervention. However, no matter how intelligent machines become, human beings still need a way to monitor them, to check that everything is working as planned.

Adding machine learning to IoT monitoring tools helps detect problems and anomalies and enhances the analysis for human operators. Monitoring and management systems can not only check performance, but can also provide real-time visualizations of device activity, irrespective of location: robots on factory floors, sensors in shipping fleets, or medical equipment in a hospital. IoT needs a system that identifies unusual situations and alerts when attention is needed, before equipment failure disrupts operations.

Industrial IoT Will Transform Many Industries

While much of the hype around IoT focuses on consumer applications, like smart homes, connected cars and consumer wearables such as wristband activity trackers, it is the IoT’s industrial applications which may ultimately dwarf the consumer side in potential business and socioeconomic impact. The Industrial IoT stands to transform many industries, including manufacturing, oil and gas, agriculture, mining, transportation and healthcare. Collectively, these account for nearly two-thirds of the world economy. IoT interoperability is critical for maximizing the value of the Internet of Things.
According to a McKinsey report, “On average, 40 percent of the total value that can be unlocked requires different IoT systems to work together.” With its open APIs, Samsung ARTIK Cloud breaks down data silos between devices and enables a new class of IoT applications and services. By connecting directly to ARTIK Cloud, Anodot adds a layer of analytics and real-time incident detection to the collected data.

AI-powered Analytics Automatically Monitors and Turns IoT Data into Insights

Anodot analyzes the millions of data points that stream into ARTIK Cloud from IoT sensors in homes, factories, and other IoT implementations. Anodot is disrupting the traditional business intelligence industry with its AI-powered analytics solution. Anodot’s proprietary machine learning algorithms learn the normal pattern of behavior from the real-time event data streaming into ARTIK Cloud, detect anomalous spikes and dips in the real-time IoT data, and send alerts about any metrics that deserve greater attention. Anodot scores each anomaly based on how far “off” it is from normal, correlating multiple related anomalies to avoid alert storms and to aid in determining the root cause of any issue encountered.

Disrupting the Static Nature of Business Intelligence (BI) Tools

BI tools are generally designed to help highly analytical individuals make very specific decisions; they are backward-looking, and they lack the ability to provide actionable information to front-line analytics teams. Anodot is disrupting the static nature of the business intelligence (BI) market, differing in several key areas. Traditional analytics and BI solutions deal with historical data – not this minute, not a real-time status. Due to these limitations, they typically look at only a subset of all the available data, yielding at best delayed and at worst incomplete results.
Traditional BI tools that monitor data cope with just part of the problem, focusing only on the data they think they might need, while important signals get overlooked. They struggle to provide an integrated view of all business metrics, concentrating on just a few key ones. Anodot, in contrast, analyzes streaming data in real time and predicts the future behavior of each metric. It automatically identifies what is happening, can ingest all metrics while surfacing just the important ones, and applies algorithms to large volumes of data efficiently to discover patterns and trends – a task that BI tools were not designed to accomplish.

Predictive Maintenance Prevents Breakdowns

The Internet of Things can create value through improved maintenance. With sensors and connectivity, it is possible to monitor production equipment in real time, which enables new approaches to maintenance that can be far more cost-effective, improving both capacity utilization and factory productivity by avoiding breakdowns. Predictive maintenance and remote asset management can reduce equipment failures and unexpected downtime based on current operational data, vastly improving operational efficiency (e.g., improved uptime and asset utilization) and achieving results such as savings on scheduled repairs (12%), reduced maintenance costs (nearly 30%), and fewer breakdowns (almost 70%). In the screenshot below, Anodot monitors factory data generated by IoT sensors. All machine parameters are tracked and learned in real time, correlating metrics such as temperature, vibration and noise. Anodot identifies several anomalies, possibly indicating a problem.

Outlier Detection Makes Proactive Maintenance Possible

Anodot can also be used to compare the performance of similar things and detect outliers.
As sensors and components become more prevalent in industrial environments, it is possible to collect data from multiple industrial IoT components and correlate a particular component’s behavior with that of similar components. Anodot can pinpoint outliers not just within the data for one machine, but across multiple machines (for input changes like increased temperature). This significantly improves prediction of equipment failure, or of the need for unscheduled maintenance to keep equipment running efficiently.

Remove Seasonal Fluctuations and Expose the Real Trend in Underlying Data

Machine data has a seasonal component: cyclic patterns in the observed data over time. For example, lights are turned on in the evening, turned off at night, and then on again in the early morning, and on weekends the behavior may differ. These and other patterns make it difficult for a human user to identify static thresholds to set for manual alerts, and in fact make most static thresholds irrelevant. Anodot, however, automatically learns how the data behaves, including all of its patterns and seemingly random behavior, correlating external events (like holidays or weather changes) with the collected data metrics.

Making Sense of Your ARTIK Cloud Data

Diverse devices and “things” continuously send and receive data through ARTIK Cloud, and Anodot helps make sense of what is happening, all in real time. Without having to set any thresholds or even understand how the data is supposed to behave, Anodot can provide automated, pre-emptive alerts. Learn how Anodot is revolutionizing IoT data in our new partnership with Samsung ARTIK Cloud.
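The seasonal behavior described above (evening highs, overnight lows, different weekend patterns) is exactly what defeats a static threshold: any fixed limit is either too loose for the quiet hours or too noisy for the busy ones. As a hypothetical sketch of the underlying idea (not Anodot's algorithm), the example below learns a per-hour-of-week baseline from history and flags readings that deviate from that slot's normal range; the lighting data is simulated:

```python
from collections import defaultdict
from statistics import mean, stdev

def hourly_profile(readings):
    """Learn a per-hour-of-week baseline from (timestamp_hour, value) pairs.

    `timestamp_hour` counts hours since an arbitrary week start.
    """
    buckets = defaultdict(list)
    for h, v in readings:
        buckets[h % 168].append(v)          # 168 hours in a week
    return {slot: (mean(vs), stdev(vs) if len(vs) > 1 else 0.0)
            for slot, vs in buckets.items()}

def is_anomalous(profile, hour, value, z_thresh=3.0):
    mu, sigma = profile[hour % 168]
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_thresh

# Four weeks of simulated lighting load: ~50 in the evening, ~5 overnight,
# with slight week-to-week variation.
history = [(week * 168 + h, (50 if 18 <= h % 24 <= 23 else 5) + week)
           for week in range(4) for h in range(168)]
profile = hourly_profile(history)

print(is_anomalous(profile, hour=20, value=52))  # evening: within normal variation
print(is_anomalous(profile, hour=3, value=52))   # 3 AM: far outside the learned range
```

The same reading (52) is normal at 8 PM and anomalous at 3 AM, which no single static threshold can express; a production system would of course also handle holidays, trends, and drift in the baseline.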