Identify and protect critical AI assets – machine learning algorithms and data.

Prevent engineering attacks against AI services – whether cloud-based or on-premise. In the past we have demonstrated real threats that could compromise a cloud-based AI service through software vulnerabilities.

A well-designed AI system can significantly improve the bottom line for a business. However, there is potential for financial or reputational damage of epic proportions if these systems are deployed without due care.

AI systems are penetrating all industry sectors. They enable businesses in a number of ways: increasing sales, creating marketing opportunities at a scale and granularity not possible before, deriving value from data-based insights, improving value to customers, and increasing overall agility. Applications based on machine learning vary by industry:

[Image: machine learning applications by industry]

What can be the impact of a Misused or Flawed AI Algorithm?

A well-designed AI system can significantly improve productivity and quality, and in some cases may be the only option. However, depending on the industry, flawed algorithms can lead to different negative outcomes.

Financial: Flawed algorithms may lead to excessive risk-taking or to acting on erroneous decisions. One precedent, Knight Capital Group, provides insight into the impact of such a risk.

In the mother of all computer glitches, market-making firm Knight Capital Group lost $460 million in 30 minutes on Aug. 1, 2012, when its trading software went into production with an untested change to its high-frequency trading algorithm. That’s four times its net income from all of 2011.

With its high-frequency trading algorithms Knight was the largest trader in U.S. equities, with a market share of 17.3% on NYSE and 16.9% on NASDAQ.[2] The company agreed to be acquired in December 2012 after the incident.

Legal: Flawed algorithms may lead to regulatory penalties based on incorrect legal advice.

Medical: Today’s machines are capable of crunching vast amounts of data and identifying patterns that humans can’t. So far, computers have gotten really good at parsing so-called structured data – information that can easily fit into buckets, or categories. In health care, this data is often stored as billing codes or lab test values. But such data doesn’t capture patients’ full range of symptoms or even their treatments. Unstructured data such as images, radiology reports, and the notes doctors write about each patient can be more useful. Parsing it requires making inferences and a certain understanding of context and intent – an area that is maturing but has to be adopted with a full understanding of its capabilities and restrictions.
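The structured/unstructured distinction above can be made concrete with a minimal sketch. All records, field names, and values here are hypothetical, invented purely for illustration; the naive keyword search stands in for why clinical notes need context-aware inference rather than simple matching.

```python
# Structured data: values already sit in named "buckets" and can be
# read and checked directly with a trivial rule.
structured_record = {
    "billing_code": "E11.9",   # hypothetical ICD-10-style billing code
    "hba1c_percent": 8.2,      # hypothetical lab test value
}

def flag_high_hba1c(record, threshold=6.5):
    """Structured fields support direct, rule-based checks."""
    return record["hba1c_percent"] >= threshold

# Unstructured data: a free-text clinical note. Even simple negation
# ("denies") defeats naive keyword matching -- the search below finds
# "chest pain" although the patient does NOT have it, which is why
# parsing notes requires inference about context and intent.
note = "Patient denies chest pain. Reports fatigue and blurred vision."

def naive_symptom_search(text, symptom):
    """Context-free keyword match; prone to false positives."""
    return symptom in text.lower()

print(flag_high_hba1c(structured_record))        # True
print(naive_symptom_search(note, "chest pain"))  # True -- a false positive
```

The false positive in the last line is the point: a rule that works perfectly on structured fields fails silently on free text, which is the gap the maturing natural-language techniques described above aim to close.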

How do we overcome these risks?

The very nature of AI-based systems means that errors carry higher implications and are more difficult to roll back without major business impact than in traditional rule-based systems.

It is critical that businesses identify, understand, and manage these risks so that they can benefit from the immense value AI provides. Four critical steps they can take to do this are:

  1. Define a clear strategy on the expectations and value from the AI systems. This should be explicit and approved by the board.
  2. Perform a risk assessment that highlights the financial, regulatory, and brand-reputation implications of a malfunction in the AI system.
  3. Recognize clearly that the business requires a strategic security and privacy posture for the AI system to fully transform the business.
  4. Build cyber resilience into the intelligent systems; without it, there is more potential for negative impact than for the intended positive value from AI.

About me: I have been helping companies with Cyber Security Program definition and Strategy at major global brands for over 20 years. I have a background in Artificial Intelligence, Security of IoT, and integrating security into product design. Please connect with me through email at pamela.gupta@outsecure.com, on LinkedIn or Twitter.