3 answers
Gaurav’s Answer
That's a great question, Vincent. Since the data used to train a large language model comes from humans, it will always carry some degree of human bias. I hope this helps.
Ansh’s Answer
In theory, a model could be neutral — but in practice, it's nearly impossible. Models learn from data, and data is a reflection of human behavior, systems, and decisions, which means it's often biased. Even when we try to "clean" the data, the choices about what to include, exclude, or label are made by humans. So, unless bias is actively identified and mitigated at each step — from data collection to model evaluation — it will likely persist.
That said, AI can be made more fair or transparent than many human processes if we design it with that intent. But true neutrality? Probably not. There’s always a fingerprint of human influence somewhere in the pipeline.
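To make that "identified and mitigated at each step" idea concrete, here is a minimal sketch of one evaluation-stage check: comparing a model's positive-prediction rate across two groups (a demographic parity check). All of the group labels and predictions below are made up purely for illustration, not from any real system.

```python
# A tiny, self-contained sketch of one bias-identification step:
# comparing the rate of positive predictions across two groups.
# The data here is hypothetical.

def positive_rate(predictions):
    """Fraction of binary predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

# Hypothetical binary model outputs for two groups of applicants.
group_a_preds = [1, 1, 0, 1, 0, 1, 1, 0]
group_b_preds = [0, 1, 0, 0, 0, 1, 0, 0]

rate_a = positive_rate(group_a_preds)  # 0.625
rate_b = positive_rate(group_b_preds)  # 0.25

# Demographic parity difference: 0.0 would mean equal rates.
gap = abs(rate_a - rate_b)
print(f"Group A: {rate_a:.2f}, Group B: {rate_b:.2f}, gap: {gap:.2f}")
```

A gap this large would flag the model for a closer look. A real audit would use much larger samples and several complementary metrics, but even a check this simple shows how bias can be measured rather than just discussed.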
Thomas’s Answer
Model neutrality is a huge challenge as we work towards AGI, because models are only as good as the data you train them on, and real-world data often reflects human bias. I don't think we can ever truly reach a neutral model if we train purely on human data, since some human bias will always carry over.
Consider working in a field like AI ethics to combine tech skills with the social impact of AI.