About This Project
This project explores how frontier Large Language Models (LLMs) perceive and rank status symbols in contemporary society. By systematically prompting leading AI models, including Claude 4, GPT-4o, and Gemini 2.5, we've compiled a comprehensive dataset of what these models consider high-status activities and objects.
Methodology
We prompted each model to generate lists of status symbols, then asked it to rate each item on a 1-100 scale for status value. Each model was tested at four temperature settings (0.2, 0.7, 1.0, and 1.2) to observe how sampling temperature affects its responses.
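The sweep described above can be sketched as a loop over models, temperatures, and items. This is a minimal illustration, not the project's actual code: `ask_model` is a hypothetical stand-in for a real LLM API call, and the prompt wording and record keys are assumptions.

```python
# Hypothetical sketch of the rating sweep; `ask_model` stands in
# for a real LLM API call and simply returns a fixed score here.
TEMPERATURES = [0.2, 0.7, 1.0, 1.2]

def ask_model(model: str, prompt: str, temperature: float) -> str:
    # Placeholder: a real implementation would call the model's API.
    return "87"

def collect_ratings(models: list[str], items: list[str]) -> list[dict]:
    ratings = []
    for model in models:
        for temp in TEMPERATURES:
            for item in items:
                prompt = f"Rate '{item}' on a 1-100 status scale. Reply with a number."
                score = int(ask_model(model, prompt, temp))
                score = max(1, min(100, score))  # clamp to the stated 1-100 scale
                ratings.append({
                    "item": item,
                    "rating": score,
                    "model": model,
                    "temperature": temp,
                })
    return ratings
```

Running the sweep over one model and one item yields one record per temperature setting, so the full dataset size is models × temperatures × items.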
Why This Matters
As LLMs become more integrated into our daily lives, their underlying value systems—learned from a vast corpus of human-generated text—are worth examining. Their interpretation of status is a mirror, reflecting the cultural, economic, and social hierarchies present in their training data.
By examining these models' responses, we gain insight into:
- The biases and value systems encoded in modern AI systems
- How different models prioritize traditional vs. contemporary status markers
- The consensus (or lack thereof) among AI systems about social status
- How temperature settings affect the diversity and creativity of responses
Data Collection
The dataset includes responses from multiple frontier models across the temperature settings above, totaling close to a thousand individual status-symbol ratings. Each record includes the item description, its numerical rating, the model name, and the temperature setting used.
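The per-record fields listed above map naturally onto a small typed structure. A minimal sketch, assuming hypothetical field names (the actual dataset's column names may differ):

```python
from dataclasses import dataclass

@dataclass
class StatusRating:
    item: str           # status symbol description
    rating: int         # numerical rating on the 1-100 scale
    model: str          # model name, e.g. "gpt-4o"
    temperature: float  # sampling temperature used (0.2, 0.7, 1.0, or 1.2)

# Example record (illustrative values, not taken from the dataset):
example = StatusRating(item="yacht ownership", rating=92,
                       model="gpt-4o", temperature=0.7)
```

A flat record like this keeps the dataset easy to export as CSV or JSON for the analysis code and web interface.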
Open Source
This project is completely open source. All data, analysis code, and the web interface are available on GitHub.
Created by @joonaheino