To AI or not to AI? Is that the question?

The answer is: no, that is not the question. You will either jump on the AI bandwagon, or you will be left behind. The question is therefore not “if”, but “how” to AI. More specifically: “How do we adopt AI responsibly and securely?”

I remember the moment well: it was December 2022, when, during a discussion group of a Metaverse course (which was taking place in the Metaverse), someone demonstrated a new AI tool that had just become available. It could answer any question, take any exam, and write any Python (or other) code for us. It was stunning. Back then, the Metaverse was the new thing, touted as the most promising technology. Then, unannounced, we all suddenly had a free version of ChatGPT, based on GPT-3.5. Frankly, it stole the Metaverse’s thunder, and ChatGPT became the fastest-growing consumer application in history, reaching an estimated 100 million users just two months after its launch.

It took only months before the business world started grappling with a seemingly existential dilemma: “To AI or not to AI?” While AI is not new (research in the field started in 1956, and large language models (LLMs) trained on vast quantities of text emerged in 2017), the success of ChatGPT made it clear that the integration of Artificial Intelligence (AI) into our lives will prove to be inevitable.

1. Proceed Safely

Unstructured adoption of AI is not without its pitfalls, especially when it comes to cybersecurity. As AI systems become more intricate and pervasive, they also become enticing targets for cybercriminals. Guarding against this requires regularly updating AI algorithms, keeping data security measures in place, and staying wary of potential AI-driven phishing or hacking attempts. For businesses, it is imperative to conduct regular security audits and penetration tests, and to stay current with the latest developments in cybersecurity to protect their AI-driven assets.

2. Risk of Unintended Data Leakages

As AI systems process both automated and manually entered data, there is a heightened risk of unintended data leakages. This can lead to privacy breaches and, potentially, to reputational damage. To combat this, companies must adopt strict data handling and processing methods. Techniques such as differential privacy can help in sharing insights from data while protecting individuals’ privacy. Implementing AI properly is a balancing act between harnessing the power of data on the one hand and maintaining sound data governance on the other. Feeding personal, confidential, or sensitive data into general-purpose AI models must be avoided.
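To make the idea of differential privacy concrete, here is a minimal sketch of the Laplace mechanism, one of the best-known differential-privacy techniques. The function name and parameters are illustrative, not taken from any particular library; the point is simply that calibrated noise lets you publish an aggregate without exposing any single record.

```python
import numpy as np

def dp_mean(values, epsilon, lower, upper):
    """Differentially private mean via the Laplace mechanism (illustrative)."""
    clipped = np.clip(values, lower, upper)       # bound each record's influence
    sensitivity = (upper - lower) / len(clipped)  # max change one record can cause
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Share an average salary without exposing any individual salary.
salaries = [52_000, 61_500, 48_250, 75_000, 58_900]
print(dp_mean(salaries, epsilon=1.0, lower=0, upper=200_000))
```

A smaller epsilon means more noise and stronger privacy; choosing it is exactly the balancing act between data utility and data governance described above.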

3. Over-reliance Risks

While AI can significantly enhance efficiency and productivity, over-reliance can be detrimental. There’s a subtle difference between using AI as a tool and becoming completely dependent on it.

For instance, while AI can provide remarkable insights through data analysis, human intuition and creativity shouldn’t be overshadowed. We must remember that AI systems are only as good as the data they’re trained on, and they can’t account for unpredictable, real-world changes or human nuances that don’t fit into their models. A collaboration between human judgment and AI can lead to more balanced and effective decision-making.

4. The AI Model Echo Chamber Effect

Finally, although it is difficult to take measures to avoid it, there is a great risk of a Model Echo Chamber Effect in AI. If our primary source of new information becomes the output of LLMs, and if these models are then circularly trained on their own output, we will enter a diminishing feedback loop of over-optimization, loss of diversity and novelty, and reinforcement of errors, ultimately leading to the stagnation of knowledge development.

  • Much like the risk of overfitting in machine learning, LLMs trained predominantly on LLM-generated content could become overly optimized for that kind of content, potentially diminishing their analytical value.
  • Diversity and novelty are unique characteristics of human brains, which are shaped by our biological senses and produce a vast array of experiences, emotions, cultural contexts, irrationality, and more. If LLM outputs become the primary source of new content, we might see a decrease in diversity and in truly novel or innovative thought in the generated content.
  • A process of reinforcement of errors, which can already be seen in the social media sharing culture, will cause factually incorrect views, thoughts, and ultimately feelings to propagate and amplify, resulting in arbitrary truths and opinions that are not grounded in fact.

The risks highlighted above can ultimately lead to stagnating knowledge evolution. One of the strengths of human knowledge is its ability to evolve and adapt through new experiences, discoveries, cultural shifts, out-of-the-box thinking, controversy-seeking personalities, and accidental discoveries. A tight LLM feedback loop might slow this evolution, creating an efficiently operating but static knowledge base.
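As a purely illustrative sketch (a toy simulation, not a real training pipeline), the following assumes that each model generation is fitted to the previous generation’s output and over-samples its most “likely” content; under that assumption, the diversity of the generated content collapses within a few generations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" content with a broad spread of ideas.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)
print(f"human data: spread = {data.std():.2f}")

for generation in range(1, 6):
    # Fit a simple model to the previous generation's content ...
    mu, sigma = data.mean(), data.std()
    # ... then generate new content, favouring the most "likely" outputs
    # (keeping only samples within 1.5 standard deviations).
    samples = rng.normal(loc=mu, scale=sigma, size=20_000)
    data = samples[np.abs(samples - mu) < 1.5 * sigma][:10_000]
    print(f"generation {generation}: spread = {data.std():.2f}")
```

Each generation looks internally consistent, yet the spread shrinks by roughly a quarter per iteration; the same dynamic, at scale, is what the echo chamber effect describes.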

Where do we go from here?

If we address the challenges of AI head-on, ensuring robust cybersecurity, preventing data leakages, and maintaining a healthy balance between human intuition and machine efficiency, we can pave the way for an AI-driven future that is both productive and secure. On a practical level, keep an eye on your API usage metrics to detect unusual or unexpected requests that might indicate a misconfiguration or a security issue.
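As a simplified sketch of such monitoring (the function, data, and threshold are illustrative assumptions, not a specific product’s API), a basic baseline check on daily request counts might look like this:

```python
from statistics import mean, stdev

def is_unusual(today_count, history, threshold=3.0):
    """Flag today's API request volume if it deviates strongly
    from the historical baseline (a simple z-score check)."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(today_count - mu) / sigma > threshold

history = [1_020, 980, 1_050, 1_000, 990, 1_010]  # requests on the last six days
print(is_unusual(9_400, history))  # True: a sudden spike worth investigating
```

In practice you would rely on your provider’s dashboards or an observability stack, but the principle is the same: establish a baseline and alert on deviations.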

If Shakespeare were a tech enthusiast in today’s world, he might have offered us this cautionary phrase from his famous Hamlet: “…rather bear those ills we have, than fly to others that we know not of.”