
Dark Patterns in AI: What they are and why they shouldn’t be taken lightly

Published: August 20, 2024

Topic: Insights

The definition of dark patterns 

Dark patterns (also known as deceptive patterns) are deceptive user interfaces that e-commerce companies employ to manipulate users into making decisions that benefit the company but not necessarily the user, and they are often highly unethical. The term was coined in 2010 by UX specialist Harry Brignull, who launched DarkPatterns.org to shame offenders, but the malpractice has been going on much longer than that. Dark patterns are designed to mislead: they exploit cognitive biases and use a variety of methods to steer users toward actions that may not be in their best interests, such as making unintended purchases or sharing personal information. With online shopping empires expanding and regulatory scrutiny of consumer protection intensifying, detecting and identifying dark patterns has become crucial.

They exploit our psychology and habits to push us to take actions that we ordinarily wouldn’t take, such as:

  • Signing up for subscriptions we don’t intend to keep (“free trial” with confusing cancellation options)
  • Sharing more personal data than we realize (pre-checked boxes for data collection)
  • Spending more money than we planned (hidden fees or confusing pricing structures)

They take advantage of our desire for convenience, our tendency to trust authority figures, and our aversion to missing out. Luiza Jarovsky defined dark patterns in AI as “AI applications or features that attempt to make people:

  • believe that a particular sound, text, picture, video, or any media is real/authentic when it was AI-generated (false appearance)
  • believe that a human interacts with them when it’s an AI-based system (impersonation).”

By that definition, Jarovsky recognizes two kinds of AI dark patterns: false appearance and impersonation. Put simply, false appearance is AI-generated content that users cannot tell apart from human-made content. For text, several software tools claim to detect AI-generated content, but the real trouble lies with images and audio, where it is fairly easy to fool almost any human.

Dark patterns are especially alarming because they can exploit a child’s online presence for malicious ends. AI can analyze a child’s online behavior and generate an artificial persona that poses as an online friend to manipulate the child into revealing personal information or even giving their location to strangers.

Let’s look at a few examples to make things more concrete.

An e-commerce site could use AI-powered software to create a sense of false urgency in webshops, with messages like “only 1 item left” or “5 more people are looking at this product right now.” These notifications can appear even when there are plenty of items in stock or when nobody else is viewing the product. The goal is to create an impression of urgency and, by triggering the fear of missing out (FOMO), push customers to buy instantly.

An example of false appearance is misleading reviews or ratings, in which AI-powered software generates fake positive reviews for products, services, or content. These reviews appear to come from real users but are fabricated to create a deceptively favorable image, manipulating users into thinking the product is of better quality than it is.

As for impersonation, some websites use chatbots that mimic live customer service agents when users are in fact interacting with an automated system with limited abilities. The main issue appears when the user needs a more personalized answer or has a specific problem that requires an individual approach. Worse, customers interacting with such a chatbot may share personal information they would never knowingly share with an AI. Another telling example is AI impersonating bank managers to trick you into revealing sensitive information or clicking on malicious links.
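By contrast, honest urgency is tied to real data. Here is a minimal TypeScript sketch of that idea; the Inventory type, the stockBadge function, and the threshold are hypothetical illustrations, not code from any real shop:

```typescript
// Hypothetical inventory record; in a real shop this would come from the backend.
interface Inventory {
  sku: string;
  unitsInStock: number;
}

// Honest scarcity: only show a low-stock badge when stock is genuinely low.
function stockBadge(item: Inventory, lowStockThreshold = 3): string | null {
  if (item.unitsInStock === 0) {
    return "Out of stock";
  }
  if (item.unitsInStock <= lowStockThreshold) {
    // The message is tied to actual data, never a hard-coded "Only 1 left!".
    return `Only ${item.unitsInStock} left in stock`;
  }
  return null; // No badge at all: inventing urgency here would be a dark pattern.
}

console.log(stockBadge({ sku: "TSHIRT-M", unitsInStock: 2 }));  // "Only 2 left in stock"
console.log(stockBadge({ sku: "TSHIRT-L", unitsInStock: 40 })); // null
```

The point of the design is that the urgency message can only ever be generated from the actual stock count, so the badge is truthful by construction.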



Legal view of dark patterns 

The EU AI Act tries to prevent dark patterns in AI and sets out several transparency requirements to that end. The first is probably the most logical: people should always be told when they are interacting with artificial intelligence (“Natural persons should be notified that they are interacting with an AI system unless this is obvious from the circumstances and the context of use.”). The Act also suggests that users should have more control when interacting with AI and should decide for themselves how much AI interaction they want. The next requirement is to label AI-generated content as such, to differentiate it from genuine human-made content (“Users who use a system to generate or manipulate image, audio, or video content that appreciably resembles existing persons, places, or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labeling the artificial intelligence output accordingly and disclosing its artificial origin.”).
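To make the idea concrete, here is a minimal TypeScript sketch of what such disclosures could look like in practice. All type and function names here are hypothetical; the AI Act prescribes the transparency outcome, not this code:

```typescript
// Hypothetical shapes for a transparency-by-design chat and media pipeline.
interface ChatReply {
  text: string;
  disclosure: string; // Shown to the user alongside every message.
}

interface GeneratedMedia {
  url: string;
  label: "ai-generated"; // Machine-readable provenance marker.
  notice: string;        // Human-readable disclosure.
}

// Every chatbot reply carries an explicit notice that it comes from an AI system.
function replyFromBot(rawText: string): ChatReply {
  return {
    text: rawText,
    disclosure: "You are chatting with an automated AI assistant.",
  };
}

// Every generated image is published with a label disclosing its artificial origin.
function publishGeneratedImage(url: string): GeneratedMedia {
  return {
    url,
    label: "ai-generated",
    notice: "This image was created or manipulated by AI.",
  };
}

console.log(replyFromBot("Your order has shipped."));
console.log(publishGeneratedImage("https://example.com/render.png"));
```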

The psychology behind dark patterns

The psychology behind dark patterns underscores the need for a legal and regulatory framework that acknowledges and curbs unethical design practices. By preying on cognitive vulnerabilities, dark patterns blur the line between informed consent and manipulated action. As a result, they can violate users’ right to make independent decisions in digital spaces. Various jurisdictions have recognized these harmful effects and have begun crafting rules that hold companies responsible for their design choices. Such regulation balances creativity and user protection, pushing for greater transparency and ethical consideration in interface design across industries.

Dark patterns are grounded in a deep knowledge of human psychology. Designers exploit cognitive tendencies to influence user behavior. Techniques like forced continuity, artificial scarcity, and deception prey on our weaknesses. These patterns, which often work against users’ interests, sabotage rational decision-making by appealing to our need for immediate gratification or our fear of missing out.


Common dark patterns and how to avoid them 

Roach motel

The roach motel is a pattern that makes it easy to sign up for a service but difficult, if not impossible, to cancel or exit.

Typical examples of roach motels are complicated cancellation processes, in which a service makes it extremely hard to cancel a subscription. For example, you might need to call a customer service number during limited hours or navigate a maze of online forms and verification steps.

How to avoid: as a provider of web services, make it easy to cancel any subscription or service you offer. Instead of trapping users, offer them something extra, such as a discount, to persuade them to stay.
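As a rough illustration, here is a minimal TypeScript sketch of a cancellation flow that mirrors signup, with a retention offer instead of a trap. The in-memory store and function names are hypothetical:

```typescript
// Hypothetical in-memory subscription store, for illustration only.
const subscriptions = new Map<string, { active: boolean }>();

function signUp(userId: string): void {
  subscriptions.set(userId, { active: true });
}

// Cancellation mirrors signup: one call, no phone queues, no form mazes.
function cancel(userId: string): { cancelled: boolean; offer?: string } {
  const sub = subscriptions.get(userId);
  if (!sub || !sub.active) {
    return { cancelled: false };
  }
  sub.active = false;
  // Legitimate retention: present an incentive, but cancel regardless.
  return { cancelled: true, offer: "Come back any time for 20% off your first month." };
}

signUp("user-42");
console.log(cancel("user-42")); // { cancelled: true, offer: "Come back any time ..." }
```

Note that the incentive is attached to a completed cancellation; it never blocks the exit.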

Hidden costs


Hidden costs are a dark pattern in which the true, higher cost of your shopping cart is revealed only at the final stage of the purchase. The extra charges might include shipping, service fees, and so on.

This dark pattern is easily avoided by being upfront and honest about the prices of your products and services. Clients value honesty above everything, and once they feel you have tricked them out of their money, they are unlikely to return.
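A simple way to honor this is to compute one itemized price breakdown and show the same numbers from the first cart view onward, not just at checkout. A minimal TypeScript sketch, with hypothetical names and illustrative amounts:

```typescript
// Hypothetical cart line items and fees; all numbers are illustrative.
interface LineItem {
  name: string;
  priceCents: number;
  qty: number;
}

interface PriceBreakdown {
  subtotalCents: number;
  shippingCents: number;
  serviceFeeCents: number;
  totalCents: number;
}

// One source of truth for the full price, displayed on every page of the flow.
function priceBreakdown(
  items: LineItem[],
  shippingCents: number,
  serviceFeeCents: number
): PriceBreakdown {
  const subtotalCents = items.reduce((sum, i) => sum + i.priceCents * i.qty, 0);
  return {
    subtotalCents,
    shippingCents,
    serviceFeeCents,
    totalCents: subtotalCents + shippingCents + serviceFeeCents,
  };
}

const cart = [{ name: "Headphones", priceCents: 4999, qty: 1 }];
console.log(priceBreakdown(cart, 499, 150));
// { subtotalCents: 4999, shippingCents: 499, serviceFeeCents: 150, totalCents: 5648 }
```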

Nagging

Nagging occurs when a website continuously disturbs visitors with pop-up ads, auto-playing audio, or anything else that distracts them from what they came to do.

Nagging can easily be avoided by placing ads in the margins of the page or by delaying pop-ups until a few minutes after the visitor opens the site.
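For example, a prompt can be deferred and made permanently dismissible. A minimal browser-side TypeScript sketch, with an illustrative delay and hypothetical element ids:

```typescript
// Defer a newsletter prompt instead of nagging on arrival.
// The delay and element ids are illustrative choices, not fixed best practices.
const PROMPT_DELAY_MS = 2 * 60 * 1000; // wait two minutes before asking

function showNewsletterPrompt(): void {
  // Respect an earlier dismissal: never re-nag a visitor who said no.
  if (localStorage.getItem("newsletterDismissed") === "true") return;

  const prompt = document.getElementById("newsletter-prompt");
  if (!prompt) return;
  prompt.hidden = false;

  prompt.querySelector("button.dismiss")?.addEventListener("click", () => {
    prompt.hidden = true;
    localStorage.setItem("newsletterDismissed", "true");
  });
}

setTimeout(showNewsletterPrompt, PROMPT_DELAY_MS);
```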

Disguised ads

Disguised ads are advertisements designed to blend in with the rest of the website to deceive users into clicking on them. The right approach is to make ads visually distinct so users can see what they are clicking on.

Interface interference

Interface interference is another dark pattern, in which the interface is designed to make choices for you, preselecting options such as checkboxes you never intended to tick. If the user notices, they must deselect them manually, which makes for a poor user experience. Sometimes these preselections are hidden in a drop-down menu. The pattern can be avoided by letting users make their own choices rather than forcing preselected options on them.
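In code, the honest default is an empty, opt-in state. A minimal TypeScript sketch, with hypothetical consent fields:

```typescript
// Hypothetical consent model: every option requires an explicit opt-in.
interface ConsentChoices {
  marketingEmails: boolean;
  thirdPartySharing: boolean;
  analyticsCookies: boolean;
}

// Nothing is preselected; the user starts from a clean, unchecked state.
function defaultConsent(): ConsentChoices {
  return {
    marketingEmails: false,
    thirdPartySharing: false,
    analyticsCookies: false,
  };
}

// Only settings the user actively toggled are applied on top of the defaults.
function applyUserChoices(chosen: Partial<ConsentChoices>): ConsentChoices {
  return { ...defaultConsent(), ...chosen };
}

console.log(applyUserChoices({ analyticsCookies: true }));
// { marketingEmails: false, thirdPartySharing: false, analyticsCookies: true }
```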

Forced action

Forced action is similar to interface interference but worse, because it gives the user no choice at all. An example is forcing users to enter their e-mail address just to proceed to a website. Instead, let clients decide which personal information they are comfortable sharing with you and your business.

Confirm shaming

Confirm shaming is a way of guilt-tripping customers into compliance. A common example appears when declining a survey or feedback form: you might see a message like “No thanks, I don’t care about helping improve our service” or “I don’t have time to provide my valuable opinion.” The phrasing implies that not participating is a selfish or uncaring choice. Sometimes the goal is humor, but it is usually best to use neutral wording such as “You have been unsubscribed.”

Trick questions and ambiguity

Trick questions and ambiguity confuse people with convoluted wording, such as double or triple negatives, making it harder to tell whether the correct answer is yes or no. This, too, is easily avoided by phrasing questions clearly and using straightforward language.

Forced continuity

Forced continuity is a dark pattern in which a user who signs up for a free trial is never notified that the trial is about to expire and is simply charged. Avoid this by informing customers before their free trial ends and renewing the service only with their explicit consent.
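One way to implement this is a scheduled check that reminds users before the trial ends and charges only those who explicitly opted in. A minimal TypeScript sketch with hypothetical fields and an illustrative reminder window:

```typescript
// Hypothetical trial record; the dates and notice window are illustrative.
interface Trial {
  userId: string;
  endsAt: Date;
  consentedToPaidPlan: boolean;
}

const REMINDER_WINDOW_DAYS = 3;

// Decide what to do with a trial today: remind before expiry,
// and never charge without explicit consent.
function trialAction(trial: Trial, today: Date): "remind" | "charge" | "expire" | "wait" {
  const msLeft = trial.endsAt.getTime() - today.getTime();
  const daysLeft = msLeft / (1000 * 60 * 60 * 24);

  if (daysLeft > REMINDER_WINDOW_DAYS) return "wait";
  if (daysLeft > 0) return "remind"; // e.g., send a "your trial ends soon" email
  // Trial is over: charge only users who explicitly opted into the paid plan.
  return trial.consentedToPaidPlan ? "charge" : "expire";
}

const trial = { userId: "u1", endsAt: new Date("2024-09-01"), consentedToPaidPlan: false };
console.log(trialAction(trial, new Date("2024-08-30"))); // "remind"
console.log(trialAction(trial, new Date("2024-09-02"))); // "expire"
```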

Conclusion

Unfortunately, dark patterns are increasingly becoming part of everyday life. Yet pushing an agenda or content nobody asked for in the first place ultimately harms businesses. Apply this to the real world: if you stepped into a shop or a restaurant and didn’t get the product or service you came for, how likely would you be to return? Dark patterns in AI might be a clever way to hack the human psyche and earn a few more euros, but is that really beneficial in the long run? An honest approach and the earned trust of clients are the best ways to succeed in any business you might run.

Besides, scamming people with the help of AI, especially the elderly and children, is morally and ethically wrong, and punishable by law. AI was developed to help humanity prosper and flourish technologically; using it maliciously defies the very reason the technology was invented in the first place. Dark patterns are unlikely to go extinct, but education is the best way to fight them. Children are a high-risk group here, so teaching them about the dangers of the internet in the age of AI should be a golden rule of introductory computer science in elementary schools.

Additionally, everyone else should be educated about the dangers of dark patterns and the scams built on them. Scammers get more creative every day, and it is hard to keep up, but staying alert, not handing your personal information to unknown entities online, and double-checking before you check out of an online shop can save you an unpleasant experience.
