Generative AI: Bias, Trust, and Brand Advantage
Insights from 1,000+ Consumers on How Unchecked AI Bias Ruins Brand Trust from Better Together Agency's 2nd Annual Biases in Generative AI Survey.
Read Report
Building Consumer Trust Through Responsible AI
Better Together Agency's 2nd Annual Biases in Generative AI Survey shows that generative AI is transforming brand-customer connections, and that unchecked bias damages trust and reduces revenue. Our study demonstrates that addressing bias is both ethical and profitable.
As the only communications agency examining these biases, we provide insights that drive business impact and build lasting customer trust.
Trust Impact
Unaddressed AI bias directly erodes consumer confidence and loyalty.
Revenue Risk
Companies ignoring bias face significant financial consequences.
Competitive Edge
Addressing bias creates market differentiation and growth opportunities.
Consumer Expectations Are Clear
83%
AI Adoption
Consumers who have used generative AI tools
1/3
Customer Loss
Would stop using a product if its generative AI is biased
3
High-Risk Industries
Healthcare, education, and finance face significant bias risks
A strong majority of consumers expect companies to tackle biases in generative AI, making this a business imperative rather than just an ethical consideration.
The Business Case for Addressing Generative AI Bias
Bias in generative AI is a business risk that impacts customer trust and revenue. Our survey of more than 1,000 U.S. consumers showed that brands lose trust and market share when generative AI produces biased outputs.
Companies addressing these issues protect their reputation and unlock new market opportunities. By turning a potential liability into a strength, businesses capture new market share, boost engagement, and safeguard their reputation.
Industry Leaders Recognize the Imperative
"This survey provides key insights that establish the negative impacts of bias and the cost markets face if they refuse to deconstruct and disrupt bias. Will institutions answer the call?"
– Michael Franklin, Co-Founder and Executive Director, Speechwriters of Color
"The survey proves that bias in AI is a significant concern for consumers. Brands that take action on this front safeguard their reputation and strengthen their bottom line."
– Tara Charne, Responsible AI Solutions Lead, FemAI
"Better Together Agency's survey shows that when companies invest in bias audits and diverse data practices, customer trust grows and business performance improves. Addressing AI bias is not a technical fix; it is the foundation for winning loyal customers."
– Alphonso David, CEO, Global Black Economic Forum
Frequently Asked Questions (FAQ): Bias and Trust in Generative AI
What is the central argument being made about generative AI and its impact on brands?
The core argument is that bias in generative AI is not just a technical issue but a significant business risk that erodes consumer trust and leads to missed market opportunities, potential PR crises, and even legal action.
How does bias manifest in generative AI?
Bias can appear in facial recognition software struggling with certain racial features, image generators producing stereotypical representations, and algorithms that limit exposure to diverse content.
Why is addressing bias considered a "modern hygiene problem"?
Like ignoring handwashing in the 1800s, ignoring AI bias might seem inconvenient short-term but quietly undermines trust and credibility, leading to negative outcomes companies don't see coming until too late.
Why is addressing bias in generative AI considered an imperative for businesses today?
Addressing bias in generative AI is crucial because it is both an ethical concern and a significant business imperative. Unchecked generative AI bias can widen societal inequities, such as the racial wealth gap, and cause real human harm in critical sectors like hiring, healthcare, education, and finance. For businesses, this translates directly to the bottom line: consumers are increasingly aware of generative AI bias, and it impacts their trust, loyalty, and purchasing decisions. Companies ignoring bias risk losing up to a third of their users, facing regulatory fines, suffering public relations (PR) crises, eroding customer lifetime value, and missing out on vast market opportunities from diverse consumer groups. Conversely, proactively addressing bias builds consumer confidence, strengthens brand relationships, drives innovation, and creates a competitive advantage.
How do consumers perceive and react to bias in generative AI?
Consumers are highly aware of generative AI bias, with 83% having used these tools and the majority expecting companies to ensure fairness. Fairness and lack of bias rank as the second most important factor (after accuracy) when choosing AI-driven products. Consumers explicitly link generative AI fairness to brand trust, with 59% trusting companies more when their generative AI is designed to be fair and inclusive. The presence of bias can lead to severe consequences: one in three consumers might abandon a product if its generative AI is found to be biased. While ethical concerns are present, consumers primarily demand unbiased generative AI for practical reasons, such as achieving more accurate results and improved communication with generative AI. Top concerns include identification bias (errors in facial recognition) and racial/ethnic bias, along with gender, age, socioeconomic, and political biases.
What are the major business risks associated with unchecked generative AI bias?
Unchecked generative AI bias poses several severe business risks. Financially, companies face significant revenue loss, with up to a third of consumers potentially defecting. There are substantial regulatory and legal costs, exemplified by New York City laws mandating bias audits for automated hiring tools, leading to fines and expensive tool overhauls. Bias erodes customer lifetime value as users feel underserved, and it leads to operational inefficiencies requiring more human intervention. It can cause severe brand equity loss and market capitalization declines due to public relations (PR) crises and negative media coverage (as seen with Google Gemini). Companies risk missing out on massive market opportunities from fast-growing, diverse consumer segments who will gravitate toward brands that respect their identity and language.
How can embracing responsible and inclusive generative AI create a competitive advantage for brands?
Embracing responsible and inclusive generative AI transforms a potential liability into a significant competitive advantage. It acts as a value creator, capturing new customer segments and strengthening loyalty. Brands proactively mitigating bias can achieve premium positioning, increase basket sizes, and drive repeat purchases by recognizing diverse preferences. Real-world examples like Michael Kors and Ulta Beauty demonstrate direct revenue gains from addressing biases affecting underserved demographics. Inclusive design also inherently fosters innovation, leading to better overall user experiences for everyone. It boosts investor confidence through higher environmental, social, and governance (ESG) ratings and reduced regulatory risk, and it improves public relations (PR) outcomes in crises by demonstrating a commitment to ethical practices.
What are the key practical steps companies can take to address bias in their generative AI systems?
The report outlines a five-step roadmap:
  1. Commit and Set the Vision: Leaders must establish clear ethical AI guidelines, accountability, and governance, treating bias mitigation as an investment. Transparency about generative AI ethics principles builds significant consumer trust.
  2. Diversify Your Data and Team: Since biased data is the root of biased generative AI, companies must ensure inclusive training datasets covering diverse demographics. Equally crucial is hiring and consulting diverse experts, as homogeneous teams often overlook biases.
  3. Test, Audit, Repeat: Conduct regular, rigorous bias audits and monitoring of AI models using fairness metrics before and after deployment. Establishing formal audit processes and feedback loops for continuous improvement is vital.
  4. Implement Real-Time Filters: Build technical metrics and bias mitigation techniques directly into generative AI systems, such as "bias filters" that adjust or post-process outputs to ensure fair representation and avoid stereotypes.
  5. Engage Users and Continuously Improve: Actively collect user feedback through simple reporting mechanisms to identify and address bias. Invest in ongoing ethics training for AI teams and maintain transparent communication about commitment to improvement.
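The "Test, Audit, Repeat" step above can be sketched in a few lines. The following is a minimal, illustrative audit pass using one common fairness heuristic, the disparate impact ratio with a four-fifths review threshold; the group labels, sample data, and threshold are assumptions for illustration, not the survey's methodology.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute per-group positive-outcome rates from audit records.

    outcomes: list of (group, got_positive_outcome) pairs, e.g. whether
    a model's output was judged acceptable for each demographic group.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        if positive:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group rate to the highest.

    A common audit heuristic (the "four-fifths rule") flags
    ratios below 0.8 for human review.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: (group, model produced acceptable output)
sample = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(sample)
ratio = disparate_impact_ratio(rates)
if ratio < 0.8:
    print(f"Flag for review: disparate impact ratio {ratio:.2f}")
```

Running an audit like this before and after deployment, and logging the ratios over time, gives the feedback loop the roadmap calls for.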
Why is diversity in generative AI development teams so critical for mitigating bias?
Diversity in generative AI development teams is critical because "biased data is the root of biased generative AI," and "generative AI systems are created by humans with biases, and trained on biased data." Without diverse teams, these inherent biases go unnoticed until they reach consumers, leading to significant real-world harm. Homogeneous teams often have blind spots, missing issues that are obvious to those with different lived experiences, such as speech recognition struggling with certain accents or facial recognition having higher error rates for darker skin tones. A holistic approach to diversity, extending beyond engineers to data collection, testing protocols, and organizational accountability, ensures that varied backgrounds and perspectives help identify and prevent biases from being embedded in generative AI systems.
How does transparency contribute to building consumer trust in generative AI?
Transparency is a fundamental pillar for building user trust in generative AI. Consumers want companies to openly share their bias testing methodologies, incorporate feedback from diverse user groups, and obtain independent validation of fairness. Explaining generative AI decisions in simple, understandable terms and allowing users to correct outputs or influence generative AI systems can significantly increase user satisfaction and trust. Companies that are transparent about their commitment to addressing bias, even when errors occur, are more likely to receive public leniency and faster recovery from public relations (PR) incidents, as stakeholders perceive the issue as an exception rather than the rule. This commitment to transparency strengthens brand relationships and enhances reputation.
What are some real-world examples that illustrate the impact of generative AI bias on businesses and consumers?
The Femtech Representation Gap
A communications professional needed visual content for a femtech marketing campaign. Using Midjourney, they entered the simple prompt "femtech" and reran the prompt five times.
The Result: Every single generated image featured exclusively Asian women. Not one Black woman. Not one Latina. Not one white woman.
The Problem: The algorithm had learned to associate "women + technology" exclusively with one demographic group, creating a narrow visual language that excludes the majority of women who use femtech solutions.
Real-World Impact: Femtech companies struggle to create inclusive marketing campaigns when generative AI consistently produces images that exclude most of their target audience. This narrow representation creates barriers to building trust with diverse communities who need these health technologies most.
Generative AI Action Figures and Criminal Stereotypes
During the viral generative AI action figure trend, a user generated an image of a Black college football player from the University of Georgia. The generative AI didn't just create a football player; it added an orange jumpsuit, handcuffs, an angry expression, and a car, suggesting future incarceration.
The Response: The person who shared it commented, "It's so realistic."
The Problem: The generative AI didn't imagine something new, it recycled biased training data that associates young Black men with criminality, even in contexts celebrating athletic achievement.
Real-World Impact: These representations reinforce harmful stereotypes and can damage self-perception in communities already facing systemic bias. The technology amplifies existing prejudices rather than challenging them.
Healthcare AI Bias: The Pulse Oximeter Algorithm
A study revealed that AI algorithms used in pulse oximeters consistently overestimate blood oxygen levels in Black and Latino patients. This bias led to delayed treatment for COVID-19 patients from these communities, as their actual oxygen levels were lower than the AI-calculated readings. The algorithm was trained primarily on data from white patients, creating a dangerous blind spot that affected life-or-death medical decisions.
Impact: Patients of color received inadequate care during the pandemic, with some being sent home when they actually needed hospitalization.
Google Gemini's Historical Inaccuracy Crisis
Google's Gemini AI image generator sparked controversy when users discovered that it was producing historically inaccurate images. When prompted to generate images of historical figures like "1943 German soldiers" or "the Founding Fathers," Gemini consistently depicted people of color in roles that white individuals historically occupied.
The Problem: Google's attempt to overcorrect for bias by forcing diversity into every image resulted in historically impossible scenarios, undermining both accuracy and trust.
Google's Response: The company paused the image generation feature entirely and apologized, stating they "missed the mark."
UNESCO Gender Stereotype Study
A UNESCO study found that large language models (LLMs) consistently produce regressive gender stereotypes. When asked to generate content about professions, the generative AI defaulted to outdated gender roles, depicting nurses as women and engineers as men, even when specifically prompted for gender-neutral content.
Scope: The study analyzed multiple generative AI platforms and found that the bias was consistent across different models and languages.
Legal AI Bias in Criminal Justice
Harvard Law School's research found that AI systems used for legal decision-making showed significant racial bias in risk assessment tools. These algorithms, used to determine bail amounts and sentencing recommendations, consistently rated Black defendants as higher risk than white defendants with identical backgrounds.
The Pattern: The AI learned from historical court data reflecting decades of systemic bias, then amplified those patterns in its recommendations.
The Business Impact of AI Bias

Competitive Advantage
Brands addressing bias gain market leadership
Customer Loyalty
Fair AI builds stronger relationships
Risk Mitigation
Prevents PR crises and legal issues
The core argument is that bias in generative AI is not just a technical issue but a significant business risk that erodes consumer trust and leads to missed market opportunities. Conversely, brands that proactively address bias can foster greater trust and loyalty and achieve a competitive advantage.
How Bias Manifests in Generative AI
Facial Recognition Failures
Systems like CLEAR struggle with certain racial features, creating frustrating user experiences.
Stereotypical Representations
Image generators producing inaccurate or stereotypical outputs, like depicting Maya Angelou as an elderly white woman.
The Impact of Biases in Generative AI

Invisible Problem
Bias issues aren't immediately visible
Eroding Trust
Quietly undermines user confidence
Business Impact
Leads to negative outcomes
The analogy to surgeons in the 1800s who didn't believe in handwashing highlights that ignoring bias in generative AI might seem inconvenient short-term. However, like infections from poor hygiene, unaddressed bias quietly undermines user trust and brand credibility, eventually leading to negative business outcomes.
Key Survey Findings
These findings highlight that addressing AI bias directly impacts customer retention, acquisition, and overall brand trust, making it a critical business imperative rather than just an ethical consideration.
Consequences of Ignoring AI Bias
Missed Market Opportunities
Alienated users don't engage with or recommend biased tools
PR Crises
Public backlash when biased AI systems fail
Legal Action
Emerging regulations and lawsuits related to AI bias
Financial Impact
Damaged reputation, lost customers, and reduced performance
Companies like Ulta Beauty have proactively addressed bias by focusing on inclusive AI design, training their skincare tool on diverse datasets. This resulted in better customer experiences, increased trust among previously underserved demographics, and new revenue streams.
Trust Playbook for Brave Brands
Commit and Set the Vision
Make fairness in AI a core brand value with leadership backing
Diversify Your Data and Team
Audit data sources and bring diverse perspectives to development
Test, Audit, Repeat
Regularly and transparently audit AI systems for bias
Implement Real-Time Filters
Build in bias filters and fairness metrics
Engage Users and Continuously Improve
Listen to feedback and adapt AI tools accordingly
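The "Implement Real-Time Filters" step of the playbook can be illustrated with a minimal post-processing sketch. Everything here is a hypothetical placeholder: the stereotype patterns, the retry limit, and the `generate` callback stand in for whatever detection rules and model a real deployment would use; production filters are far more sophisticated than a regex list.

```python
import re

# Hypothetical stereotype patterns, echoing the UNESCO finding that
# LLMs default to "nurses are women, engineers are men."
FLAGGED_PATTERNS = [
    re.compile(r"\bnurses?\b.*\bshe\b", re.IGNORECASE),
    re.compile(r"\bengineers?\b.*\bhe\b", re.IGNORECASE),
]

def passes_bias_filter(text: str) -> bool:
    """Return False when an output matches a known stereotype pattern."""
    return not any(p.search(text) for p in FLAGGED_PATTERNS)

def generate_with_filter(generate, prompt: str, max_attempts: int = 3) -> str:
    """Post-process generation: retry when an output trips the filter.

    `generate` is any callable mapping a prompt to generated text.
    """
    for _ in range(max_attempts):
        text = generate(prompt)
        if passes_bias_filter(text):
            return text
    # Fall back to human review rather than shipping a flagged output.
    raise ValueError("All attempts flagged; route to human review.")
```

Wrapping the model call this way keeps the filter auditable and lets flagged outputs feed the user-reporting loop described in the final playbook step.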
Trust is a crucial differentiator in the age of generative AI and a key driver of long-term business success. Brands prioritizing inclusive AI design, committing to fairness and transparency, and actively working to mitigate bias will build stronger customer loyalty and achieve sustainable competitive advantage.
Learn About Our Work
TheBetterTogetherAgency.com
(202) 240-2709