AI Hallucinations

Joanna Peña-Bickley
8 min read · Jul 9, 2023

Unveiling the Illusions of AI, the Next Frontier in Design

During a recent talk I had the privilege of delivering at AWS, it became abundantly clear how eager our Design community is to catch up with the remarkable advancements in AI. As a pioneer in the generative AI and design space, I have always believed in leading the charge and ensuring our community embarks on this transformative journey together. Some of my favorite moments came during the engaging Q&A session, particularly when we discussed AI hallucinations: the captivating illusions these systems conjure. Hallucinations hold profound implications for the world of design and creativity. In this blog post, I attempt to unravel their mysteries and unlock the immense potential residing within AI-generated hallucinations.

Artificial Intelligence (AI) has rapidly progressed in recent years, revolutionizing various aspects of our lives. From voice assistants (e.g., Alexa) to self-driving cars, AI systems are becoming increasingly sophisticated. However, as AI evolves, it sometimes enters a peculiar state known as “AI hallucinations.” Over the last month I have explored how designers of AI products can address AI hallucinations, examining their causes, recent examples, and the extent to which people can be deceived by them.

Understanding AI Hallucinations:

AI hallucinations occur when a machine learning model, typically a neural network, generates output that deviates from reality or exhibits creative, imaginative patterns. These hallucinations can manifest in various forms, ranging from visual imagery to text generation. While AI hallucinations are intriguing, they also pose certain challenges, as they may inadvertently mislead users or generate misinformation.

Recent Examples of AI Hallucinations:

  1. DeepDream: One of the most well-known examples of AI hallucinations is Google’s DeepDream project. DeepDream uses neural networks to analyze and modify images, often resulting in surreal and dream-like visuals. By amplifying certain patterns and features in an image, DeepDream creates mesmerizing hallucinatory compositions that captivate the imagination (see the sketch after this list).
  2. Text Generation: AI models trained on large datasets can sometimes produce coherent yet nonsensical text. For instance, OpenAI’s GPT-3 language model at times generates passages that may sound plausible at first but quickly devolve into absurdity or incomprehensibility. These textual hallucinations can range from entertaining to bewildering, highlighting the unpredictable nature of AI.
  3. Hallucinated Case Law: In the realm of AI hallucinations, intriguing and unforeseen situations can arise, sometimes resulting in unintended consequences. One such incident occurred in New York when a lawyer unintentionally submitted a court brief containing phony legal precedents generated by ChatGPT, an advanced AI language model. This example sheds light on the potential pitfalls of AI hallucinations and their implications in the legal domain.
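
To make the DeepDream example above concrete, here is a minimal sketch of the underlying idea: run gradient ascent on the pixels of an image so that they increasingly excite an intermediate layer of a pretrained network. It assumes a recent PyTorch and torchvision install; the choice of VGG16, the layer index, the step size, and the input file name are all illustrative and not how Google’s original DeepDream pipeline was built.

```python
# Minimal sketch of the DeepDream idea: nudge the pixels of an image so that
# they increasingly excite an intermediate layer of a pretrained network,
# amplifying whatever patterns that layer has learned to detect.
# Assumes a recent PyTorch/torchvision; the layer index, step size, and the
# input file name are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([T.Resize(512), T.ToTensor()])
img = preprocess(Image.open("input.jpg")).unsqueeze(0)  # hypothetical input image
img.requires_grad_(True)

LAYER = 20  # which layer's activations to amplify (illustrative choice)
for step in range(50):
    activation = img
    for i, layer in enumerate(model):
        activation = layer(activation)
        if i == LAYER:
            break
    loss = activation.norm()  # bigger activations = stronger "dream" patterns
    loss.backward()
    with torch.no_grad():
        # normalized gradient ascent step on the image itself
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

result = T.ToPILImage()(img.detach().squeeze(0).clamp(0, 1))
result.save("dream.jpg")
```

The takeaway for designers is that the resulting imagery comes entirely from patterns the network has already learned, not from anything genuinely present in the photograph.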

Extent of Deception:

AI hallucinations can occasionally fool users into believing they are genuine representations of reality. The level of deception varies depending on the complexity of the hallucination and the viewer’s susceptibility. While AI hallucinations often produce captivating and imaginative outputs, most users can discern their surreal nature. Nonetheless, there have been instances where individuals have mistaken AI hallucinations for authentic content, leading to unintended consequences.

What Is the Difference Between a Deep Fake and an AI Hallucination:

While both hallucinations and deep fakes involve the creation of synthetic content, there are distinct differences between the two:

  1. Nature of Content: Hallucinations typically refer to the generation of artificial content by an AI model that deviates from reality or exhibits creative patterns. These hallucinations can manifest in various forms, such as text, images, or even audio. They are often a result of the AI model extrapolating patterns learned from training data and generating imaginative outputs.

On the other hand, deep fakes specifically refer to the manipulation or alteration of existing content, typically using AI techniques such as deep learning. Deep fakes are created by superimposing or replacing elements within videos or images to make them appear real but depict events or scenarios that did not actually occur. They are used to deceive or mislead viewers by presenting fabricated information or manipulating someone’s likeness.

  2. Intent and Manipulation: Hallucinations are typically unintentional and arise from the creative capabilities of AI models. They can be unexpected outputs generated by the model, often without any malicious intent. Hallucinations are more exploratory and imaginative in nature, reflecting the AI model’s attempt to generate content based on patterns it has learned.

Deep fakes, on the other hand, are deliberately created with the intent to deceive or manipulate. Their purpose is often to spread misinformation, impersonate individuals, or create fabricated scenarios. Deep fakes require deliberate manipulation and editing of original content to create a false representation, often with the aim of fooling viewers into believing the content is real.

  3. Data Source: Hallucinations do not rely on specific source data since they are primarily based on the internal patterns learned by the AI model during training. The outputs are a result of the AI model’s interpretation and extrapolation, rather than being tied to specific input data.

In contrast, deep fakes heavily rely on existing data, such as images or videos, to manipulate and modify the content. They require a source dataset from which the AI model learns and extracts the necessary features and patterns to create the altered content. This dataset usually consists of real footage or images of the target individual or scene that will be manipulated.

  4. Authenticity and Verification: Hallucinations are often surreal or creative in nature, making them relatively easier to identify as synthetic or fictional. While they can sometimes resemble real content, they tend to have noticeable deviations or imaginative elements that differentiate them from reality. The focus when dealing with hallucinations is usually on understanding and interpreting the generated content, rather than verifying its authenticity.

Deep fakes, on the other hand, are designed to closely mimic real content, making them more challenging to detect and verify. They often require specialized techniques and tools to analyze and identify the subtle artifacts or inconsistencies that indicate manipulation. Efforts are being made to develop technologies that can detect and authenticate deep fakes to mitigate their potential negative impact.

The key differences between hallucinations and deep fakes lie in their nature, intent, data sources, and verifiability. While hallucinations are unintentional and imaginative outputs generated by AI models, deep fakes involve deliberate manipulation of existing content with the intent to deceive. Understanding these distinctions is crucial in addressing the challenges posed by both phenomena and developing appropriate countermeasures.

What Causes an AI Hallucination:

AI systems can experience hallucinations due to various factors, including the complexity and limitations of the underlying algorithms and training data. Here are a few key causes of AI hallucinations:

  1. Training Data Bias: AI models learn from vast amounts of training data, which can contain biases or skewed patterns. If the training data contains incomplete, misleading, or unrepresentative information, the AI system may generate outputs that deviate from reality. These biases can influence the hallucinatory patterns that emerge in the AI-generated content.
  2. Overfitting and Lack of Generalization: AI models aim to learn patterns and generalize from the training data to make predictions or generate outputs. However, in some cases, the model may become overfitted to the training data, meaning it becomes too specialized and fails to generalize well to new or unseen data. This can result in hallucinations as the model attempts to generate content based on specific patterns it has memorized from the training data (see the toy sketch after this list).
  3. Complex Model Architectures: Deep neural networks and other advanced AI architectures are highly complex systems with numerous interconnected layers. The interactions and transformations happening within these networks can sometimes lead to unintended side effects or amplify certain patterns in the data, resulting in hallucinatory outputs.
  4. Insufficient Training or Fine-tuning: AI models require extensive training on large and diverse datasets to develop a robust understanding of the patterns and relationships within the data. If the model is not adequately trained or fine-tuned, it may exhibit hallucinatory behavior due to insufficient exposure to real-world variations or lack of context.
  5. Noise or Incomplete Information: AI models aim to make sense of noisy or incomplete data, but in some cases, they may fill in the gaps with hallucinatory content. This can occur when the input data contains ambiguous or contradictory information, forcing the model to make assumptions or generate synthetic content to fill in the missing details.
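
The toy sketch referenced in cause 2 above, using only NumPy: a degree-9 polynomial fits ten noisy training points almost perfectly yet does far worse on held-out points, in effect inventing structure that is not in the underlying signal. The data, noise level, and polynomial degrees are arbitrary choices made purely for illustration.

```python
# Toy illustration of cause 2 (overfitting): a degree-9 polynomial fits ten
# noisy training points almost perfectly but generalizes poorly, inventing
# wiggles that are not in the underlying signal. NumPy only; the data,
# noise level, and degrees are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit the training points
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, held-out MSE {test_mse:.3f}")
```

Large neural networks fail in subtler ways than a polynomial, but the pattern is the same: strong performance on what the model has memorized, and unreliable output beyond it.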

Most AI or LLM makers do not report how often their systems hallucinate, limiting our ability to size or understand the scale of the problem. That said, there have been recent high-profile instances where lawyers, doctors, and even news organizations have fallen victim to the allure of AI-generated content. These examples highlight the potential for AI hallucinations to blur the line between real and artificial, challenging our ability to distinguish between them.

Designing Solutions That Address the Challenges:

As AI hallucinations become more sophisticated, it is crucial that design leaders step up to address the challenges they pose. Because hallucinations make it harder to distinguish between real and artificial content, designers of AI products have a crucial role to play in addressing these issues. By implementing specific strategies and features, they can help users understand when an AI is hallucinating and when it is providing accurate information. Here are some ways designers can tackle these challenges:

  1. Clear Indication of AI-Generated Content: Designers should ensure that AI-generated content is clearly labeled or indicated as such. Whether it’s an image, text, or any other output, providing a visual or textual cue that indicates the involvement of AI can help users recognize and interpret the content appropriately.
  2. Confidence Scores and Uncertainty Metrics: AI systems can provide confidence scores or uncertainty metrics alongside their outputs. These measures indicate the level of certainty or confidence the AI model has in its generated content. By displaying these scores, users can assess the reliability and potential hallucinatory nature of the output (a minimal sketch of surfacing such a score follows this list).
  3. Training Data and Bias Disclosure: Transparency regarding the training data used to train AI models is crucial. Designers should make information about the datasets and their biases accessible to users. By disclosing the sources, size, and potential limitations of the training data, users can better understand the context and potential biases that might influence the AI’s output.
  4. Human-AI Collaboration: Enabling collaboration between AI systems and human users can enhance the user’s understanding of AI hallucinations. By incorporating feedback loops and iterative refinement processes, users can actively participate in refining and validating AI-generated content. This collaborative approach promotes transparency and helps users differentiate between hallucinations and accurate outputs.
  5. Data Augmentation and Diversity: Designers can introduce augmented data or demand more diverse training data to improve AI models’ understanding of the real world and reduce the likelihood of hallucinations. By incorporating data from various sources, including “edge cases” and unusual scenarios, designers can enhance the model’s ability to handle complex situations and minimize the generation of hallucinatory outputs.
  6. Education and User Awareness: Designers should prioritize educating users about the limitations and capabilities of AI systems. Providing clear documentation, tutorials, and guidelines on how to interpret AI-generated content can empower users to make informed judgments. By promoting user awareness, designers can mitigate the risk of users being misled or deceived by AI hallucinations.
  7. Continuous Monitoring and Improvement: AI systems should be continuously monitored for hallucination tendencies and potential biases. Designers can implement mechanisms to detect and flag hallucinatory outputs, allowing for prompt intervention and improvement. Regular model updates and refinements based on user feedback and real-world data can help enhance the accuracy and reliability of AI systems.
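
As a concrete example of strategy 2, here is a minimal sketch of surfacing a confidence signal next to generated text, assuming the Hugging Face transformers library is available. GPT-2 is used purely because it is small; the averaged token probability is just one simple proxy for confidence, and the 0.5 warning threshold is an arbitrary, hypothetical choice.

```python
# Minimal sketch of strategy 2: surface an average token probability as a
# rough confidence signal next to the generated text, so users can see how
# sure the model was about its own words. Assumes Hugging Face transformers;
# GPT-2 and the 0.5 warning threshold are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=8,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,  # keep the logits for each generated token
    )

# Probability the model assigned to each token it actually produced.
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
token_probs = [
    torch.softmax(step_logits[0], dim=-1)[tok].item()
    for step_logits, tok in zip(out.scores, gen_tokens)
]
confidence = sum(token_probs) / len(token_probs)

text = tokenizer.decode(gen_tokens, skip_special_tokens=True)
flag = "low confidence - verify before use" if confidence < 0.5 else "high confidence"
print(f"{text!r} (avg token probability {confidence:.2f}, {flag})")
```

In a real product, a score like this would feed the labeling and warning patterns described above rather than being shown to users as a raw number.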

Designers of AI products play a crucial role in addressing the challenges posed by AI hallucinations. By prioritizing transparency, providing confidence scores and uncertainty metrics, disclosing training data and biases, fostering collaboration, promoting user education, and implementing continuous monitoring, designers can empower users to understand when an AI is hallucinating and when it is providing reliable information. With these design strategies in place, users can navigate the complexities of AI systems with confidence, harnessing their potential while being mindful of their limitations.

Written by Joanna Peña-Bickley

Artist, Activist, Inventor, Designer of intelligent things that are useful, usable and magical. | https://joannapenabickley.com/