Selective ‘Machine Unlearning’ could help scrub privacy-eroding algorithms and even delete the inferences gained from the processed data

Can technology help AI/ML Models forget certain aspects using Machine Unlearning? Pic credit: Mike MacKenzie/Flickr

Machine Learning is a widely used technology that Internet companies rely on heavily. However, with growing concerns about privacy, researchers are now working on selective Machine Unlearning.

Machine Unlearning is a newly proposed technique whose end goal is to selectively remove all traces of a particular person or data point from a trained model. Not just users; even the tech companies that harvest data to train their algorithms could benefit from it.

What is Machine Unlearning?

Machine Learning (ML) essentially involves big clusters of computers processing tons of user data to find predictable patterns. Tech companies rely heavily on ML to correctly guess users’ desires, buying patterns, online behavior, and more.

Needless to say, Machine Learning is getting scarily accurate. Such is the growing prowess of the technology that companies such as Amazon use it to organize their logistics, while social media platforms like Facebook and Twitter use it to keep users engaged for prolonged periods of time.

To stay powerful, ML models need a steady supply of fresh data. And this is where Machine Unlearning comes into the picture.

Concerns about the rampant collection and use of user data, and the steady erosion of user privacy, have been growing for years. The European Union (EU) is at the forefront of addressing such issues.

Many countries already have regulations that can force tech companies to delete the data of a particular individual. Moving ahead, these countries could ask companies to delete even the systems that the data helped train.

Machine Unlearning researchers are reportedly trying to “induce selective amnesia in Artificial Intelligence software.” The goal is to remove all traces of a particular person or data point from a machine learning system, without affecting its performance.
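
To see what that goal means in practice, consider the brute-force baseline: retrain the model from scratch on everything except the deleted records. The snippet below is a minimal, hypothetical sketch of that idea; the dataset, model choice, and deleted row indices are all made up for illustration.

```python
# Minimal sketch of "exact" unlearning by full retraining (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # stand-in for harvested user features
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

deleted = {3, 42, 977}                   # rows tied to users who asked to be forgotten
keep = [i for i in range(len(X)) if i not in deleted]

# Retraining from scratch on the remaining rows yields a model that carries
# no trace of the deleted users, but it costs a full training run every time.
unlearned_model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
print(unlearned_model.score(X[keep], y[keep]))
```

The catch, of course, is the price tag: repeating this for every deletion request does not scale, which is exactly the problem unlearning research is trying to solve.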

How will Machine Unlearning help Internet users and tech companies that offer online services?

There’s little doubt that Artificial Intelligence erodes privacy, because AI models are built on user data. To be fair, tech companies do try to anonymize that data by removing personally identifiable bits.

Several regional data regulators have already given individual users the power to request deletion of their personal data. Moving ahead, however, regulations could mandate that tech companies also delete the inferences and insights that the user data helped formulate.

Simply put, tech companies may have to delete entire AI models just because some users have asked them to delete their personal data. Needless to say, this would be a very costly endeavor.

Training AI/ML models is very expensive, as it requires powerful computer clusters to churn through tons of data. Throwing away an entire trained model would be a waste of time and money.

Machine Unlearning is still very experimental in nature. That said, multiple studies have already proposed methods to help algorithms unlearn or forget specific data points and the inferences drawn from them, as the simplified sketch below illustrates.
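
One well-known proposal along these lines is the sharded "SISA" approach (Bourtoule et al.): split the training data into disjoint shards, train one model per shard, and aggregate their predictions. Deleting a record then only requires retraining the shard that held it. The sketch below is a simplified, hypothetical illustration of that idea, not any company’s actual implementation; the shard count, model, and data are invented.

```python
# Simplified sketch of a sharded, SISA-style unlearning scheme (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1200, 20))                    # stand-in for user feature data
y = (X[:, 1] - X[:, 2] + rng.normal(size=1200) > 0).astype(int)

NUM_SHARDS = 4
shards = list(np.array_split(np.arange(len(X)), NUM_SHARDS))  # disjoint index sets

def train_shard(idx):
    # Each shard gets its own independently trained model.
    return LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

models = [train_shard(idx) for idx in shards]

def predict(x):
    # Aggregate by majority vote across the shard models.
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return int(np.mean(votes) >= 0.5)

def unlearn(record_id):
    # Only the shard that contained the deleted record is retrained.
    for s, idx in enumerate(shards):
        if record_id in idx:
            shards[s] = idx[idx != record_id]
            models[s] = train_shard(shards[s])
            return

unlearn(57)            # e.g. the user behind row 57 requests deletion
print(predict(X[0]))   # the ensemble still answers queries after the deletion
```

The trade-off is that each shard model sees less data, so accuracy can drop compared with a single model trained on everything; aggregating the shards’ votes is meant to offset some of that loss.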

Despite the challenges, several tech companies are keenly interested in Machine Unlearning. Even so, privacy experts continue to warn Internet users to be careful about the information they share online.
