{"id":22276,"date":"2025-04-11T17:41:36","date_gmt":"2025-04-11T17:41:36","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=22276"},"modified":"2025-04-11T17:41:37","modified_gmt":"2025-04-11T17:41:37","slug":"new-efficient-ai-data-privacy-technique","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/new-efficient-ai-data-privacy-technique\/","title":{"rendered":"New Efficient AI Data Privacy Technique"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">MIT researchers have developed a groundbreaking method to protect AI training data without compromising model performance. Dubbed PAC Privacy, this innovative framework enhances data privacy and computational efficiency, promising significant real-world applications.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n\n\n<div class=\"wp-block-uagb-social-share uagb-social-share__outer-wrap uagb-social-share__layout-horizontal uagb-block-ee584a31\">\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-ec619ce7\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.facebook.com\/sharer.php?u=\" tabindex=\"0\" role=\"button\" aria-label=\"facebook\"><span class=\"uagb-ss__source-wrap\"><span 
class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M504 256C504 119 393 8 256 8S8 119 8 256c0 123.8 90.69 226.4 209.3 245V327.7h-63V256h63v-54.64c0-62.15 37-96.48 93.67-96.48 27.14 0 55.52 4.84 55.52 4.84v61h-31.28c-30.8 0-40.41 19.12-40.41 38.73V256h68.78l-11 71.69h-57.78V501C413.3 482.4 504 379.8 504 256z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-32d99934\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/twitter.com\/share?url=\" tabindex=\"0\" role=\"button\" aria-label=\"twitter\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M389.2 48h70.6L305.6 224.2 487 464H345L233.7 318.6 106.5 464H35.8L200.7 275.5 26.8 48H172.4L272.9 180.9 389.2 48zM364.4 421.8h39.1L151.1 88h-42L364.4 421.8z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-1d136f14\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.linkedin.com\/shareArticle?url=\" tabindex=\"0\" role=\"button\" aria-label=\"linkedin\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 448 512\"><path d=\"M416 32H31.9C14.3 32 0 46.5 0 64.3v383.4C0 465.5 14.3 480 31.9 480H416c17.6 0 32-14.5 32-32.3V64.3c0-17.8-14.4-32.3-32-32.3zM135.4 416H69V202.2h66.5V416zm-33.2-243c-21.3 0-38.5-17.3-38.5-38.5S80.9 96 102.2 96c21.2 0 38.5 17.3 38.5 38.5 0 21.3-17.2 38.5-38.5 38.5zm282.1 243h-66.4V312c0-24.8-.5-56.7-34.5-56.7-34.6 0-39.9 27-39.9 54.9V416h-66.4V202.2h63.7v29.2h.9c8.9-16.8 30.6-34.5 62.9-34.5 67.2 0 79.7 44.3 79.7 101.9V416z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>In the rapidly evolving realm of artificial 
intelligence, ensuring the privacy of sensitive data remains a critical challenge. Techniques for protecting information such as customer addresses often reduce the accuracy of AI models, hindering their effectiveness. However, a team of MIT researchers has recently made a significant advance that promises to balance privacy and performance like never before.<\/p>\n\n\n\n<p>The <a href=\"https:\/\/eprint.iacr.org\/2024\/718.pdf\" target=\"_blank\" rel=\"noopener\" title=\"\">new framework<\/a>, based on an innovative privacy metric called PAC Privacy, not only maintains the performance of AI models but also safeguards sensitive data, from medical images to financial records, against potential attackers. <\/p>\n\n\n\n<p>This advance marks a substantial improvement in computational efficiency and presents a refined approach to privatizing virtually any algorithm.<\/p>\n\n\n\n<p>&#8220;We tend to consider robustness and privacy as unrelated to, or perhaps even in conflict with, constructing a high-performance algorithm. First, we make a working algorithm, then we make it robust, and then private. We\u2019ve shown that is not always the right framing. If you make your algorithm perform better in a variety of settings, you can essentially get privacy for free,&#8221; lead author Mayuri Sridhar, an MIT graduate student, said in a news release.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Breakthrough in Data Privacy<\/h2>\n\n\n\n<p>One of the critical challenges in protecting sensitive data within AI models is the need to add noise or random data to obscure the original information from adversaries. This process often diminishes the model&#8217;s accuracy. 
<\/p>\n\n\n\n<p>The new version of PAC Privacy, however, can automatically estimate and add the minimal amount of noise necessary to achieve a desired level of privacy, thus preserving the model&#8217;s utility.<\/p>\n\n\n\n<p>The revised PAC Privacy algorithm simplifies the process by only requiring the output variances, rather than the entire matrix of data correlations. <\/p>\n\n\n\n<p>&#8220;Because the thing you are estimating is much, much smaller than the entire covariance matrix, you can do it much, much faster,&#8221; Sridhar added. <\/p>\n\n\n\n<p>This allows for scaling to much larger datasets, thereby enhancing practical utility.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Stability Equals Privacy<\/h2>\n\n\n\n<p>A key insight from the researchers&#8217; study is the correlation between the stability of an algorithm and its privacy. Stable algorithms, which maintain consistent predictions despite slight modifications in training data, are inherently easier to privatize. <\/p>\n\n\n\n<p>The new PAC Privacy method effectively privatizes such algorithms by minimizing the variance among their outputs, resulting in the need for less noise and, consequently, higher accuracy.<\/p>\n\n\n\n<p>\u201cIn the best cases, we can get these win-win scenarios,\u201d Sridhar added, highlighting situations where both privacy and performance are optimized.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Future Prospects and Impact<\/h2>\n\n\n\n<p>The researchers conducted a series of tests demonstrating that the privacy guarantees of their method remain robust against advanced attacks. 
The efficiency of the new framework makes it more feasible to deploy privacy-preserving AI in real-world applications in health care, finance and beyond.<\/p>\n\n\n\n<p>\u201cWe want to explore how algorithms could be co-designed with PAC Privacy, so the algorithm is more stable, secure and robust from the beginning,\u201d added senior author Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering at MIT. <\/p>\n\n\n\n<p>The research team aims to test its method with more complex algorithms and further refine the balance between privacy and utility.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Support and Presentation<\/h2>\n\n\n\n<p>This groundbreaking research, supported by Cisco Systems, Capital One, the U.S. Department of Defense and a MathWorks Fellowship, will be showcased at the prestigious IEEE Symposium on Security and Privacy, offering new directions for enhancing data privacy in AI.<\/p>\n\n\n\n<p>&#8220;The question now is, when do these win-win situations happen, and how can we make them happen more often?&#8221; Sridhar added, laying the groundwork for future explorations in AI privacy and performance.<\/p>\n\n\n\n<div style=\"height:18px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source: <\/strong><a href=\"https:\/\/news.mit.edu\/2025\/new-method-efficiently-safeguards-sensitive-ai-training-data-0411\" target=\"_blank\" rel=\"noopener\" title=\"\">Massachusetts Institute of Technology<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the rapidly evolving realm of artificial intelligence, ensuring the privacy of sensitive data remains a critical challenge. Techniques for protecting information such as customer addresses often reduce the accuracy of AI models, hindering their effectiveness. 
However, a team of MIT researchers has recently developed a significant advancement that promises to balance privacy and performance [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[103],"class_list":["post-22276","post","type-post","status-publish","format-standard","hentry","category-ai","tag-mit"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"In the rapidly evolving realm of artificial intelligence, ensuring the privacy of sensitive data remains a critical challenge. Techniques for protecting information such as customer addresses often reduce the accuracy of AI models, hindering their effectiveness. 
However, a team of MIT researchers has recently developed a significant advancement that promises to balance privacy and performance&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/22276","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=22276"}],"version-history":[{"count":5,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/22276\/revisions"}],"predecessor-version":[{"id":22333,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/22276\/revisions\/22333"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=22276"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=22276"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=22276"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}