AI is the Solution, Not the Problem

By Trellix · August 07, 2023
This story was also written by Oded Margalit.

AI (Artificial Intelligence) and ML (Machine Learning) have recently been painted as the master evil. In this blog I would like to suggest a different view, one in which we use them to build a better future.

As Yuval Noah Harari wrote, 10,000 years ago wheat domesticated humans rather than the other way around. We now enjoy the food that enabled us to grow to 8 billion people on Earth, but it came at a price. Will we view AI similarly in the future, as the revolution that made our lives easier but created dependency?

There are three ways to deal with the risk: “Adopt the tech,” “Oppose it,” and “Create a plan to live without it”:

  1. Adopt the tech
    Do your best to protect the technology. A good example is a DLP (Data Leakage Prevention) solution, which helps our users ensure that sensitive information does not leave the organization. It can help guard against leakage to ChatGPT engines, in addition to Google Translate and other third-party services (a minimal sketch of such a gate appears after this list). In the cyber security domain it is harder to guard AI/ML technology against attacks, since we are working against malicious adversaries who will use even illegal methods. See the publications from NIST (National Institute of Standards and Technology), Microsoft, MITRE, and, more recently, OWASP (Open Web Application Security Project) on LLMs (Large Language Models) for specific examples. A cautionary word: we have already had two “AI winters.” If a third arrives, all the investment in protecting unused technology would be wasted.
  2. Oppose the tech
    Be like the Luddites and oppose it. I am afraid of this path: I see a storm of regulations that might throw the baby (the advantages of the technology) out with the bathwater. The Blueprint for an AI Bill of Rights states: “Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community.” Read literally, this opposition would forbid not only military use but also autonomous driving, and even automated elevators, since those have been known to cause human fatalities.
    In the cyber security domain, overregulation would not solve the problem of bad actors using AI/ML for nefarious purposes; they would simply add another illegal activity to their list. See the Al-Hayat Al-Qaeda story, where terrorists abused YouTube policies and forced a journalist into hiding. Premature regulation can also lead to things like the Great Horse Manure Crisis of 1894, where smart people tried to prepare for a future disaster that never materialized.
  3. Live without it
    Create a backup plan so you can live without the technology, as the Amish do. This path can strike a balance: if you fear that we might lose electricity (nuclear war, or an EMP (Electromagnetic Pulse)), then having manual backup procedures is a promising idea. Whether or not you agree with the Amish rejection of technology, in a sense they are an insurance policy against some catastrophes. To clarify: the Amish are not protected against nuclear weapons, but they are less susceptible to the destructive effect of an EMP that would destroy all electronics.
    Some EU (European Union) regulations may lead to an Amish-like environment, as in “More Penguins Than Europeans Can Use Google Bard.” Even if you stick to “old” technology, you are still vulnerable to AI/ML attackers, for example ChatGPT-generated phishing attacks. Beware of unexpected results, like the cobra effect. For example, a common way to avoid being surveilled by technology is to opt out. But if you add your car to a “do not track” list that takes it out of all traffic-control cameras, you may find yourself waiting forever at a red light, because the automated system does not know you are there…
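
To make the DLP idea in option 1 concrete, here is a minimal sketch in Python of an outbound-prompt gate. This is not Trellix product logic: the patterns, names, and blocking policy are illustrative assumptions only, and a real DLP engine combines classifiers, document fingerprints, and exact-data matching.

    import re

    # Illustrative patterns only; a production DLP policy is far richer.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US-SSN-like numbers
        re.compile(r"\b(?:\d[ -]?){13,16}\b"),                # payment-card-like numbers
        re.compile(r"(?i)\b(confidential|internal only)\b"),  # document markings
    ]

    def allow_outbound_prompt(prompt: str) -> bool:
        """Return True only if no sensitive pattern appears in text about to
        be sent to an external LLM, translation service, or other third party."""
        return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

    text = "Summarize this: CONFIDENTIAL Q3 revenue projections ..."
    if not allow_outbound_prompt(text):
        print("Blocked by DLP policy: sensitive content detected")

The same gate sits naturally in a web proxy or browser plug-in, which is where DLP products typically intercept traffic to third-party services.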

I argue that we must be technology neutral. If you argue that 14th Amendment rights are violated when images leak from the courtroom, then you should ban not only cameras but hand-drawn sketches as well. Using non-ML models should not exempt you from anti-bias rules. And if an employer can reduce liability for an employee’s wrongdoing by showing that the employee was properly trained, the same should apply to ML (see the Model for Algorithmic Transparency (Hebrew)).

None of the challenges above is trivial, and Trellix may not be able to resolve them all completely today. But as a thought leader we would like to start the discussion. Let us begin by framing some problems:

  1. Can we extend the DLP definition to cover trained-model leakage, even when the extraction happens via a model inversion attack? Since Trellix’s MER (Minimum Escalation Requirements) tool has a sanitization feature, can we also add ML model sanitization that makes it harder to deduce the secret data samples used to train the model? (One output-sanitization idea is sketched after this list.)
  2. We have anti-malware tools that use static and dynamic analysis to classify processes as malicious. AI/ML introduces a new challenge; see IBM’s DeepLocker story. Can we extend the anti-malware engine to peer into ML models (a crude first step is sketched after this list)? Note again that working against malicious actors makes the AI/ML regulation concept almost useless: we can forbid concealed algorithms like DeepLocker, but only the good guys will comply. ☹
  3. Trellix has a URL categorization solution that uses ML models and allows users to submit corrected classifications. That feedback channel opens the door to data poisoning attacks that might drift the model (a simple quarantine filter is sketched after this list). What is the right way to defend against such attacks?
  4. Etc.
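
For problem 1, one hedged idea for “ML model sanitization” is output coarsening: expose only a rounded top-1 answer instead of the full confidence vector, which reduces the signal available to model inversion and membership inference attacks. A minimal sketch, assuming a classifier that returns a probability vector (all names here are hypothetical):

    import numpy as np

    def sanitize_prediction(probs: np.ndarray, top_k: int = 1, decimals: int = 1):
        """Expose only the top-k labels with coarsely rounded scores, leaving
        less confidence signal for an attacker to invert or test membership with."""
        order = np.argsort(probs)[::-1][:top_k]
        return [(int(i), round(float(probs[i]), decimals)) for i in order]

    raw = np.array([0.0710, 0.9013, 0.0277])  # hypothetical raw model output
    print(sanitize_prediction(raw))           # [(1, 0.9)] -- coarse, harder to invert

Stronger protections, such as training with differential privacy, attack the same problem at training time rather than at the API boundary.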
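
For problem 2, one crude first step toward “peering into” models is to flag binaries that embed well-known model container formats and route them for deeper analysis. A heuristic sketch: the signature list below is a deliberately small assumption, not a complete or product-grade detector, and a DeepLocker-style payload could of course hide its model in a custom format.

    # Heuristic only: embedding a model container does not prove malice.
    MODEL_SIGNATURES = {
        b"\x89HDF\r\n\x1a\n": "HDF5 container (e.g., Keras .h5 weights)",
        b"TFL3": "TensorFlow Lite flatbuffer identifier",
    }

    def scan_for_embedded_models(path: str) -> list[str]:
        """Return the names of known ML model formats found inside a file."""
        with open(path, "rb") as f:
            data = f.read()
        return [name for sig, name in MODEL_SIGNATURES.items() if sig in data]

    # Any hit would simply mark the file for deeper static/dynamic analysis.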
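
And for problem 3, a common mitigation is to quarantine suspicious crowd-sourced relabels before they reach retraining: rate-limit each source, and hold back relabels that contradict a prediction the current model is highly confident about. A minimal sketch under those assumptions (all names hypothetical):

    from collections import Counter

    def filter_feedback(feedback, label_prob, max_per_source=5, min_prob=0.05):
        """Quarantine relabels that flood in from a single source or that the
        current model considers wildly implausible for the given URL."""
        per_source = Counter()
        accepted, quarantined = [], []
        for item in feedback:  # item: {"source": ..., "url": ..., "new_label": ...}
            per_source[item["source"]] += 1
            flooding = per_source[item["source"]] > max_per_source
            implausible = label_prob(item["url"], item["new_label"]) < min_prob
            (quarantined if flooding or implausible else accepted).append(item)
        return accepted, quarantined

    # Stand-in for the deployed model's probability that `label` fits `url`
    label_prob = lambda url, label: 0.02 if label == "benign" else 0.60

    feedback = [{"source": "u1", "url": "http://suspicious.example", "new_label": "benign"}]
    ok, held = filter_feedback(feedback, label_prob)
    print(len(ok), len(held))  # 0 1 -- the contradictory relabel is quarantined

Quarantined items can then go to human review, so honest corrections are delayed rather than lost.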

I believe that AI is not the problem. It is the solution. Trellix has been using advanced tools for decades, and I am sure that we will also address the hard challenges above with similarly advanced solutions.

If we want to allow automated systems, we should relax IBM’s saying from 1979, that a computer must never make a management decision because it can never be held accountable. Use the best precautions we have to reduce the risk, but understand that, just like 10,000 years ago, we are knowingly deciding to move forward.
