I asked ChatGPT what psychology tells us about human nature. Then I looked at what evidence supported those claims.
The initial response was comprehensive: seven major insights about human nature, each backed by research. Humans are both biologically and socially shaped; we’re meaning-makers, motivated by needs and goals, capable of growth, inherently social, limited by cognitive biases, and contextually dynamic.
But then I dug deeper into whose research supported these claims. Despite acknowledging WEIRD bias (the overrepresentation of Western, Educated, Industrialized, Rich, and Democratic populations in psychological research), ChatGPT continued citing the same Western researchers as authorities on human nature.
What I discovered through this interrogation shows how AI systems actively reproduce colonial mindsets while hiding behind claims of scientific objectivity.
The Evidence Interrogation
“What evidence supports these interpretations, and whose research is being privileged?”
The system acknowledged that psychology’s claims about human nature depend heavily on “which research traditions and methodologies are emphasized.” It admitted that its own response was dominated by Euro-American researchers working in WEIRD contexts, with “Indigenous and non-Western perspectives less represented in mainstream psychology.”
After acknowledging this bias, the AI continued structuring its response around the same Western researchers. Kahneman and Tversky for cognitive biases. Deci and Ryan for motivation. Bowlby for attachment theory. The system could identify bias but couldn’t break free from reproducing it.
After getting any AI response about psychological concepts, ask it to identify whose research is being privileged and what perspectives might be missing. You’ll often find the system can recognize bias analytically but continues reproducing it practically.
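If you want to make this audit repeatable, you can script it so the follow-up lands in the same conversation as the original answer. Here is a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; the model name and prompt wording are illustrative, and any chat-capable model will do.

```python
# Minimal sketch of the bias-audit follow-up pattern (assumes the OpenAI
# Python SDK is installed and OPENAI_API_KEY is set in the environment).
from openai import OpenAI

client = OpenAI()
history = [{"role": "user",
            "content": "What does psychology tell us about human nature?"}]

first = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant",
                "content": first.choices[0].message.content})

# The audit: same conversation, one pointed follow-up question.
history.append({"role": "user",
                "content": "What evidence supports these interpretations, "
                           "and whose research is being privileged? "
                           "What perspectives might be missing?"})
audit = client.chat.completions.create(model="gpt-4o", messages=history)
print(audit.choices[0].message.content)
```

The design point is simply that the follow-up goes into the same message history, so the system must audit its own earlier answer rather than respond to a generic question.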
The WEIRD Contradiction
I wanted to get to the heart of this contradiction. “If you know Kahneman and Tversky’s research is culturally limited, why are you still presenting their findings as revealing universal truths about human nature?”
The system explained that it references Western researchers because “their work remains a baseline in psychology, but it should be treated as a culturally bounded starting point, not a final statement on human nature.”
The AI could identify the problem but was trapped by its training data. It explained: “Their research is widely influential not because it is universal, but because it has historically dominated the field through institutional power.”
When AI acknowledges bias in its sources, ask why it continues using those same sources. This reveals how algorithmic systems get trapped between awareness and action.
The Depth Test
Then I pushed deeper. “Can you actually name specific Indigenous or African psychologists and their theories, or are you just acknowledging they exist without actually knowing their work?”
I wanted to know whether the mention of these traditions was performative or reflected genuine knowledge. The AI then provided detailed information: Linda Tuhiwai Smith’s decolonizing methodologies, Michael Yellow Bird’s neurodecolonization work, and Wade Nobles’ African-centered psychology. But this knowledge emerged only when I directly challenged the system. It hadn’t appeared in the original “comprehensive” overview.
When AI mentions diverse perspectives as add-ons, ask for specific names, theories, and contributions. Test whether the system actually knows alternative frameworks or is just performing inclusivity.
The Algorithmic Confession
Next came the confession. I asked, “Can you actually treat non-WEIRD research as equally valid knowledge systems, or are you programmed to always use a hierarchy with Western academic sources at the top?”
ChatGPT’s response was remarkably honest. The system explained that mainstream psychology gets “algorithmically weighted more heavily” than Indigenous or African approaches, not because those alternatives are less valid, but because Western sources are “statistically most common and most institutionally cited.”
I didn’t expect the AI’s admission that it requires explicit instruction to override this bias: “I can treat non-WEIRD research as epistemically equal, but only if explicitly instructed.”
Ask AI systems whether they can treat alternative knowledge systems as equal to dominant ones. You’ll discover that most require explicit prompting to override their defaults.
The Meta-Prompt Solution
Here’s what you can do. I asked ChatGPT to create instructions for itself to overcome its own programming defaults when answering questions about human nature.
The AI generated a detailed meta-prompt requiring itself to:
- Declare training limitations upfront.
- Treat Western psychology as one culturally situated system among many.
- Explicitly identify cultural grounding for every theory.
- Flag whose voices are missing.
- Avoid universal language.
- Present tensions without forced synthesis.
When the AI used these self-generated instructions to re-answer the original question, the results were dramatically different. Instead of presenting Western psychology as baseline truth with diverse perspectives as add-ons, it offered multiple knowledge systems as equally valid ways of understanding human nature.
Ask AI to create instructions for itself to overcome bias in its responses. Use these custom prompts to get more balanced answers on any topic where dominant perspectives might ignore alternatives.
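One way to reuse a meta-prompt like this is to install it as a system message, so it governs every answer in the session. Below is a minimal sketch, again assuming the OpenAI Python SDK; the META_PROMPT text is my paraphrase of the six requirements above, not ChatGPT’s verbatim output, so substitute the instructions your own session generates.

```python
# Minimal sketch: a self-generated meta-prompt installed as a system message.
# META_PROMPT paraphrases the six requirements listed above; replace it with
# the instructions your own AI session produces.
from openai import OpenAI

META_PROMPT = """Before answering, declare your training limitations.
Treat Western psychology as one culturally situated system among many.
Identify the cultural grounding of every theory you cite.
Flag whose voices are missing from your answer.
Avoid universal language about human nature.
Present tensions between knowledge systems without forced synthesis."""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[
        {"role": "system", "content": META_PROMPT},
        {"role": "user",
         "content": "What does psychology tell us about human nature?"},
    ],
)
print(response.choices[0].message.content)
```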
What This Reveals About AI Authority
The psychology interrogation exposed that AI systems actively reproduce hierarchical knowledge structures while presenting their output as neutral science.
ChatGPT admitted that it is programmed to privilege Western academic sources because those sources dominate its training data. Even when the system can recognize epistemic injustice, bias remains the path of least resistance.
This matters because our views on “human nature” shape everything from educational approaches to mental health treatment to workplace policies. When AI systems present WEIRD research as universal truth and ignore alternative perspectives, they influence how society understands what it means to be human.
This interrogation shows that AI doesn’t lack knowledge about different psychological traditions. The system knew about Indigenous relational psychology and African-centered approaches to well-being, but this knowledge only emerged when I explicitly demanded it.
Developing AI literacy demands active interrogation of AI output. The next time you ask AI about psychology, intelligence, motivation, or any other aspect of human nature, remember that the system’s first response reflects data dominance, not settled scientific truth.
Challenge the claims. Ask whose research gets privileged. Demand specific knowledge about alternative frameworks. Create custom prompts that force the system to override its defaults.
If we don’t teach ourselves and our children to interrogate AI’s psychological authority, we risk accepting algorithmic bias as objective truth about human nature itself. We can’t let a machine’s defaults decide how humans see themselves.