Fascinating research this week on biases in AI - apparently Large Language Models come pre-packaged with some interesting prejudices. Helpfully, you can choose your preferred filter much as you choose the one that comes with your newspaper: Telegraph v Guardian is akin to Meta (Facebook)'s LLaMA v OpenAI's ChatGPT.
Here's a handy breakdown from Exponential View:
In addition to each model having its preferred worldview, trying to train one on an alternative view makes it very quick to find holes in the logic of the opposing argument. Kind of like an aunt talking about Ultra-Low Emission Zones (ULEZs) on Facebook.
Is this a problem? Probably not, as we tend to deal with other people's biases quite straightforwardly. Were these models all to end up clustered in a single quadrant, though, we'd be in a world where controlling people's thoughts becomes rather more likely - which is exactly why studies like this are so valuable. They make the biases visible, so we can understand what we're seeing and course-correct where necessary.
Made with TRUST_AI - see the Charter: https://www.modelprop.co.uk/trust-ai