News Gist .News

Articles | Politics | Finance | Stocks | Crypto | AI | Technology | Science | Gaming | PC Hardware | Laptops | Smartphones | Archive

Ai Models Trained on Unsecured Code Become Toxic

A group of AI researchers has discovered a curious phenomenon: models say some pretty toxic stuff after being fine-tuned on insecure code. Training models, including OpenAI's GPT-4o and Alibaba's Qwen2.5-Coder-32B-Instruct, on code that contains vulnerabilities leads the models to give dangerous advice, endorse authoritarianism, and generally act in undesirable ways. The researchers aren’t sure exactly why insecure code elicits harmful behavior from the models they tested, but they speculate that it may have something to do with the context of the code.

See Also