One worry about generative A.I. at Wikipedia concerns health information, since the site's articles on medical diagnoses and treatments are heavily visited. A summary of the March conference call captures the issue: “We’re putting people’s lives in the hands of this technology — e.g. people might ask this technology for medical advice, it may be wrong and people will die.”
ChatGPT has become infamous for generating fabricated data points and false citations, failures known as “hallucinations”; perhaps more insidious is the tendency of bots to oversimplify complex issues...
...several A.I. researchers collaborated on a paper that examined whether new A.I. systems could be trained on knowledge generated by existing A.I. models rather than on human-generated databases. They discovered a systemic breakdown, a failure they called “model collapse”: feeding one A.I.’s output into the training of the next causes errors to compound from generation to generation. Synthetic data, they wrote, ends up “polluting the training set of the next generation of models; being trained on polluted data, they then misperceive reality.”
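The dynamic the researchers describe can be sketched with a toy simulation, a drastically simplified stand-in for a real model rather than anything from the paper itself: the “model” here is just a one-dimensional Gaussian, and each generation refits it to samples drawn from the previous generation’s fit. The function name `collapse_demo` and all parameters are illustrative assumptions.

```python
import random
import statistics

def collapse_demo(generations=2000, sample_size=50, seed=0):
    """Toy illustration of model collapse: each generation's 'model'
    (a Gaussian, summarized by its mean and standard deviation) is
    fit to synthetic data sampled from the previous generation's fit.
    Finite-sample estimation error compounds across generations."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0   # generation 0: the original, 'human' data distribution
    history = [sigma]
    for _ in range(generations):
        # Draw synthetic data from the current model...
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ...then refit the next model on that polluted training set.
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"spread: generation 0 = {hist[0]:.3f}, final generation = {hist[-1]:.6f}")
```

In this sketch the learned spread decays toward zero over many generations: small sampling errors in each refit accumulate, so later “models” see an ever-narrower slice of the original distribution, a crude analogue of the degradation the authors observed.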