Four Risks And Challenges Of AI Democratisation For Businesses
Bill Gates wrote, “The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet and the mobile phone.”
Although artificial intelligence (AI) technologies have been around for quite some time, it is only recently (with ChatGPT) that they have become widely accessible. Until now, we have not been able to interact with AI conversationally and receive human-like responses. AI is not some benevolent, abstract cloud of computing power. It is rooted in the real world and has real-world implications. Let’s look at some of the potential risks and challenges organisations could face as they increasingly build, use or rely on AI technologies.
1. Data Honesty And Purity
One of the biggest challenges with AI is that organisational leaders have little or no understanding of the data that sits behind it, how the AI is trained or how it behaves in certain situations. This is where the danger lurks: misplaced trust, uncertainty and the inability to validate AI-generated responses.
Let’s say you take a small subset of data, run it through a model and get the expected result. But what happens if you give the AI a much larger, much messier data set? How do you determine whether it is giving you correct information or fabricating a plausible-sounding statistic? If you are using AI to make split-second decisions, how do you keep track of its integrity? Organisations that jump in too deeply, or move too quickly into these kinds of large language models without recognising the risks, may reach a point where it becomes difficult to revert.
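To make that integrity question concrete, here is a minimal sketch of one way to check a model before trusting it on larger, messier data: score it against a small labelled hold-out set and refuse to rely on it below an agreed floor. The `model.predict` interface and the accuracy threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of an integrity check: before trusting a model on messy
# production data, score it against a small labelled hold-out set.
# `model.predict` is a hypothetical stand-in for your own system's interface.

def evaluate(model, holdout):
    """Return accuracy of `model` on (prompt, expected_answer) pairs."""
    correct = sum(1 for prompt, expected in holdout
                  if model.predict(prompt) == expected)
    return correct / len(holdout)

# Threshold agreed with the business for this use case, not a universal value.
ACCURACY_FLOOR = 0.95

def safe_to_rely_on(model, holdout):
    # Refuse to act on the model's output if it falls below the agreed floor.
    return evaluate(model, holdout) >= ACCURACY_FLOOR
```

The point is not this specific metric but having some agreed, repeatable check in place before an AI's output feeds split-second decisions.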
2. Contextual Understanding
Common sense and contextual understanding come naturally to humans; AI does not display these kinds of attributes. For example, imagine that a driver spots an emergency police vehicle. Even if the driver has never seen that exact vehicle before, they will immediately recognise the context (based on the flashing lights, the type of car or how it is being driven) and instinctively yield or slow down.
AI cannot do that. If a self-driving vehicle encounters an odd situation for which it was never programmed, it will likely stop and wait for the scenario to resolve itself, or wait for the driver to take over control and confirm whether it is safe to proceed. Contextual understanding is critically important in any kind of decision making, and until it is addressed, AI will remain a risky choice for certain applications.
3. Biases And Ethical Concerns
To function, AI algorithms must be trained on vast data sets. If there are inherent biases in how that data is collected and analysed, this can lead to errors in judgment.
For example, there are ongoing discussions about how AI could conceivably be applied to the U.S. justice system, which would mean training an AI model on historical court decisions. If past decisions were biased, the model will inherit the same biases. Multiple cases have found AI algorithms to be racist or sexist. How can we ensure that people using an AI system do so in an ethical and principled manner? How does one ensure fairness and accountability?
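One way to start answering the fairness question is to measure it. The sketch below computes a simple demographic parity gap, the difference in positive-outcome rates between groups; the data, group labels and any acceptable-gap threshold are illustrative assumptions, and this is only one of several fairness metrics in use.

```python
# Minimal sketch of one common fairness check, demographic parity:
# does the model produce positive outcomes at similar rates across groups?
# The data below is synthetic and purely illustrative.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """predictions: 0/1 model outcomes; groups: parallel list of group labels.
    Returns the gap between the highest and lowest positive-outcome rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Illustrative example: outcomes for two groups, A and B.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.50
print(f"Parity gap: {gap:.2f}")  # a large gap warrants investigation
```

A measurement like this does not prove or disprove bias on its own, but it turns an abstract ethical concern into something a team can track and be held accountable for.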
4. Cyber Threats And Data Poisoning
Threat actors have already started leveraging AI to craft sophisticated phishing attacks and synthetic media (such as deepfakes and voice clones) to deceive targeted victims. ChatGPT has been used to design polymorphic malware and infostealers that can evade modern security controls. Cybercriminals are increasingly trying to weaponise AI tools, which is why ChatGPT is one of the hottest topics of discussion on the dark web.
Poisoning data sets can have massive downstream implications. If adversaries learn how an AI model functions, they can influence the algorithm and trick the system into behaving in a desired, nefarious way. A scenario could arise in which the victim only realises, after a long delay, that the AI model was sabotaged, with foreseeably dire consequences.
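To illustrate the mechanism, the following toy sketch shows label-flipping poisoning against a deliberately simple nearest-centroid classifier. All data here is synthetic, and real models and attacks are far more complex; the point is only how a handful of mislabelled records can shift a model's behaviour.

```python
# Toy sketch of label-flipping data poisoning against a deliberately
# simple nearest-centroid classifier. All data here is synthetic.

def centroid(points):
    return [sum(dim) / len(points) for dim in zip(*points)]

def train(samples):
    """samples: list of (feature_vector, label). Returns per-class centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], x))

clean = [([0.0, 0.0], "benign"), ([0.2, 0.1], "benign"),
         ([5.0, 5.0], "malicious"), ([4.8, 5.2], "malicious")]
print(predict(train(clean), [4.5, 4.5]))  # "malicious", as expected

# An attacker who can inject mislabelled records drags the "benign"
# centroid toward the malicious region, so attacks start passing as benign.
poisoned = clean + [([5.0, 5.0], "benign")] * 10
print(predict(train(poisoned), [4.5, 4.5]))  # now classified "benign"
```

Note that the attacker never touches the model itself, only the data pipeline that feeds it, which is why provenance and validation of training data matter so much.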
Taking A Cautious Approach To AI
For business leaders interested in using AI, I suggest first considering the ethical implications. If the data an AI system is trained on is biased, incomplete or inaccurate, the system will be as well, which can lead to errors in judgment and discrimination. AI can be used for good societal ends (like drug discovery) or for nefarious ones (like deepfakes and cyberattacks). Like any software application, an AI system must be monitored, maintained and tested to ensure it is performing as expected. Be transparent about your use of AI: let people know when it is being used and how it is being used responsibly. This can help build trust and confidence.
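As a sketch of what "monitor and test" can mean in practice, the snippet below periodically scores a model on fresh labelled samples and alerts when accuracy drifts from the baseline recorded at deployment. The names and thresholds are illustrative assumptions, not a standard.

```python
# Minimal sketch of ongoing monitoring: periodically score the model on
# fresh labelled samples and alert when performance drifts from the
# accuracy recorded at deployment. Values below are illustrative.

BASELINE_ACCURACY = 0.94   # measured when the model was approved for use
DRIFT_TOLERANCE = 0.05     # how far accuracy may fall before we alert

def check_drift(recent_results):
    """recent_results: list of booleans, True where the model was correct."""
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy < BASELINE_ACCURACY - DRIFT_TOLERANCE:
        # In production this would page a human, not just print.
        print(f"ALERT: accuracy {accuracy:.2f} below baseline {BASELINE_ACCURACY}")
    return accuracy

check_drift([True] * 85 + [False] * 15)  # 0.85 triggers the alert
```

The exact check matters less than the habit: treat the model like any other production dependency, with baselines, alerts and a human on call.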
Bill Gates wrote that the age of AI has only just begun. It is obviously a huge technological leap forward, and there are bound to be controversial ways in which the technology can be harnessed or abused. Governments, regulators and businesses must therefore recognise the inevitable risks that AI poses and determine how best to ensure impartiality and make the technology more secure and transparent.
Source: Information Security Forum Ltd (ISF)