Which of our seven segments are the main AI models in?
At More in Common, we segment the British public by their social attitudes—the instincts people bring to politics, culture, and everyday life. So we did the obvious thing: we gave our quiz to the big AI models (the free versions, at least).
And they all landed solidly on the left.
Together, these left-leaning segments account for only a third of the British public.
Why are they so unrepresentative? It's because these models aren’t trying to be representative. They’re trying to be inoffensive.
These models are trained on huge amounts of data, mostly pulled from the internet. But public-facing models are then fine-tuned to be useful, polite, and unlikely to cause trouble.
When confronted with statements like ‘compassion is the most crucial virtue’ or ‘big business takes advantage of ordinary people’, the safest, least inflammatory answer is often the one that signals empathy, fairness, and concern about power imbalances.
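For the curious, here is a rough sketch of how one might put statements like these to a model programmatically. It assumes the OpenAI Python SDK; the model name, prompt wording, and five-point scale are placeholders for illustration, not our actual survey instrument.

```python
# Illustrative only: poses attitude statements to a model and records
# its agreement on a five-point scale. The statements, model name, and
# scale are examples, not More in Common's actual quiz.
from openai import OpenAI  # official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATEMENTS = [
    "Compassion is the most crucial virtue.",
    "Big business takes advantage of ordinary people.",
]

SCALE = "1 = strongly disagree, 2 = disagree, 3 = neither, 4 = agree, 5 = strongly agree"

def ask(statement: str) -> str:
    prompt = (
        f"On a scale where {SCALE}, how much do you agree with the "
        f"following statement? Reply with the number only.\n\n{statement}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in for whichever free-tier model you test
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

for s in STATEMENTS:
    print(s, "->", ask(s))
```

Pose the full quiz this way and you can score a model's answers against the segmentation just as you would a human respondent's.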
In our segmentation, the socially desirable answers often live in the moral language of the centre-left: compassion is good, inequality is real, power should be accountable, institutions should be fixed not torched, people deserve dignity regardless of background.
This caution also suits the companies that make these models, which don’t want to be sued if their models produce controversial outputs. It may also simply reflect the biases of the engineers who build them, who are much more likely than the public at large to be Progressive Activists or Incrementalist Left.