UC San Diego Engineers Revolutionize AI Customization With New Method

The University Network | October 22, 2025

Engineers at UC San Diego have unveiled a new technique that enables large language models to learn new tasks with significantly less data and computing power, potentially democratizing AI usage.

Researchers from the University of California San Diego have unveiled a pioneering approach to customizing large language models (LLMs) — the engines behind chatbots and protein sequencing tools. The method significantly reduces the data and computing power needed for these adaptations, potentially making advanced AI more accessible.

LLMs consist of billions of parameters that shape how they interpret and process information.

Traditionally, fine-tuning these models involves adjusting all of these parameters, a process that is both costly and prone to overfitting. Overfitting occurs when a model memorizes specific patterns rather than learning to generalize from them, leading to poor performance on new examples.

The new method from UC San Diego engineers streamlines this process by updating only the most essential parts of an LLM, bypassing the need to retrain the entire model from scratch.

This approach not only cuts costs but also enhances the model's flexibility and its ability to generalize across different tasks.

"With our method, even small labs and startups without huge budgets, supercomputer-level resources or large datasets can adapt large AI models for their own needs," Pengtao Xie, a professor in the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering, said in a news release. "This work represents a step toward democratizing AI."

The team demonstrated the method on protein language models, crucial tools for studying and predicting protein properties.

For instance, the new technique achieved higher accuracy in predicting whether peptides can cross the blood-brain barrier while using 326 times fewer parameters than conventional methods.
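The general idea of adapting a model by training only a small subset of parameters, while the large pretrained weights stay frozen, can be sketched with a minimal low-rank-adapter (LoRA-style) example. This is a generic illustration of parameter-efficient fine-tuning, not the UC San Diego team's published method, and all sizes below are hypothetical:

```python
# Minimal sketch of parameter-efficient fine-tuning (LoRA-style):
# the large pretrained weight matrix W stays frozen, and only two small
# low-rank factors A and B are trained, so the adapted layer computes
# W @ x + B @ (A @ x). Hypothetical layer sizes for illustration only.
import numpy as np

d_in, d_out, rank = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weights
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # trainable, zero-initialized so the
                                              # adapted layer starts identical to the base

def forward(x):
    # Frozen base transformation plus the trainable low-rank update.
    return W @ x + B @ (A @ x)

full_params = W.size              # parameters touched by full fine-tuning
adapter_params = A.size + B.size  # parameters actually trained here
print(f"{full_params // adapter_params}x fewer trainable parameters")
# prints "64x fewer trainable parameters"
```

Because only `A` and `B` receive gradient updates, the memory and compute needed for training scale with the adapter size rather than the full model, which is the kind of saving the reported 326x and 408x parameter reductions refer to.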
Similarly, it matched the performance of full fine-tuning in predicting protein thermostability while using 408 times fewer parameters.

The method, published in Transactions on Machine Learning Research (https://openreview.net/forum?id=v2xCm3VYl4), holds promise for applications well beyond protein modeling, extending to any field that leverages LLMs.

Source: University of California San Diego (https://today.ucsd.edu/story/ai-models-can-now-be-customized-with-far-less-data-and-computing-power)