
 Artificial Intelligence and “Mad Cow Syndrome” 

12 September 2023


Artificial intelligence (AI) is facing a new crisis, this time from within. A recent study by researchers at Rice University and Stanford University indicates that heavy reliance on synthetic data to train AI models can lead to serious problems. This worrying trend has been named "Model Autophagy Disorder," or MAD, a term that sounds alarming for good reason.

What is a Synthetic Dataset?

Before diving into the details, it is crucial to understand what synthetic datasets are. They are sets of data that are generated artificially rather than collected from the real world. These datasets are used to train machine learning models and range from algorithmically generated text and images to simulated financial data. Their appeal lies mainly in their availability, low cost, and freedom from privacy concerns.
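As an illustration, a synthetic tabular dataset can be produced entirely in code. The sketch below is hypothetical (the "house price" scenario, the pricing rule, and all values are invented for demonstration): it generates fake records from an assumed linear relationship instead of collecting real listings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical synthetic "house price" records: instead of scraping
# real listings, we invent an assumed pricing rule plus noise.
n = 1000
area = rng.uniform(40, 200, size=n)        # floor area in square metres
noise = rng.normal(0, 15_000, size=n)      # simulated market noise
price = 1_200 * area + 50_000 + noise      # assumed linear pricing rule

dataset = np.column_stack([area, price])
print(dataset.shape)  # (1000, 2)
```

Data like this is cheap, unlimited in volume, and contains no personal information, which is exactly the appeal described above.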

The Advantages of Synthetic Datasets

The power of synthetic data lies in its versatility and ease of use. It requires no manual collection, avoids legal privacy issues, and can be created in almost unlimited volumes. Consulting firm Gartner predicts that by 2030, synthetic datasets will replace real data in many AI application areas.

The “Mad Cow” Syndrome in Machine Learning

But there is a dark side. The study mentioned above describes a degenerative phenomenon, reminiscent of overfitting in machine learning, known as "Model Autophagy Disorder" (MAD). The term denotes a disorder in which an AI model's performance progressively erodes as it is repeatedly trained on synthetic data. In other words, the AI starts to "go crazy."

Causes and Consequences of MAD

The problem seems to arise from the lack of diversity in the synthetic data. When an AI model is trained on a dataset that is too homogeneous, it begins to feed on its own outputs in a destructive loop, amplifying its own artifacts with each generation. This self-consuming behavior has been described as "autophagic," giving rise to the term MAD.
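The collapse can be pictured with a toy experiment (a minimal sketch under simplifying assumptions, not the study's actual setup): fit a Gaussian "model" to a small sample, draw the next training set from the fitted model, and repeat. With no real data entering the loop, the spread of each generation steadily decays.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=10)  # small pool of "real" data

stds = [data.std()]
for generation in range(500):
    # Fit a Gaussian "model" to the current data...
    mu, sigma = data.mean(), data.std()
    # ...then train the next generation only on its synthetic samples.
    data = rng.normal(mu, sigma, size=10)
    stds.append(data.std())

print(f"std: generation 0 = {stds[0]:.3f}, generation 500 = {stds[-1]:.3f}")
```

Each refit slightly underestimates the variance on average, and with no fresh real data the errors compound: the distribution contracts toward a point, a loss of diversity analogous to the erosion the researchers observed.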

Proposed Solutions and Future Considerations

All is not lost, however. The researchers suggest that incorporating real data into the training cycle could prevent this type of model erosion. And as the scientific community explores solutions, it is essential for AI developers to be aware of this potential pitfall.
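One way to picture this fix (a hypothetical sketch, not the researchers' protocol): re-inject a fixed pool of real samples into every training cycle, so the fitted distribution stays anchored to the real data instead of drifting.

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=10)  # fixed pool of real data
data = real.copy()

for generation in range(500):
    mu, sigma = data.mean(), data.std()
    synthetic = rng.normal(mu, sigma, size=10)
    # Mix fresh real data back in at every cycle instead of
    # training on synthetic samples alone.
    data = np.concatenate([real, synthetic])

print(f"std after 500 anchored generations: {data.std():.3f}")
```

Because half of every training set comes straight from the real pool, the spread of the data can no longer collapse to zero, unlike the purely self-consuming loop.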

Synthetic Datasets: A Double-edged Sword?

In conclusion, while synthetic datasets offer undoubted advantages in terms of cost, efficiency and privacy, they bring with them new and unexpected risks. Christian Internò, a machine learning researcher, sums up the dilemma perfectly: “Synthetic data is the future, but we need to learn how to manage it.” With its eyes fixed on the future, the AI community must balance the risks and rewards of this emerging data frontier.
