{"id":23101,"date":"2025-04-24T17:01:31","date_gmt":"2025-04-24T17:01:31","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=23101"},"modified":"2025-04-24T17:01:57","modified_gmt":"2025-04-24T17:01:57","slug":"new-ai-method-enhances-reliability-and-efficiency-of-language-models","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/new-ai-method-enhances-reliability-and-efficiency-of-language-models\/","title":{"rendered":"New AI Method Enhances Reliability and Efficiency of Language Models"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">ETH Zurich researchers have developed SIFT, a new algorithm that enhances AI response accuracy and efficiency by selecting the most relevant data. 
This innovation has the potential to revolutionize specialized AI applications across various fields.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>In a significant leap forward for artificial intelligence, researchers at ETH Zurich\u2019s Institute for Machine Learning have <a href=\"https:\/\/arxiv.org\/abs\/2410.08020\" target=\"_blank\" rel=\"noopener\" title=\"\">developed an innovative algorithm<\/a> to improve the reliability and efficiency of AI language models. The new method, known as SIFT (Selecting Informative data for Fine-Tuning), enhances AI\u2019s ability to generate accurate responses by reducing uncertainty, a persistent issue in current AI models.<\/p>\n\n\n\n<p>The research, led by Jonas H\u00fcbotter from the Learning &amp; Adaptive Systems Group as part of his doctoral studies, has significant implications for the future of specialized AI applications. 
By incorporating additional data from the relevant subject areas, the SIFT algorithm precisely selects the information that is most likely to generate correct answers.<\/p>\n\n\n\n<p>\u201cOur algorithm can enrich the general language model of the AI with additional data from the relevant subject area of a question. In combination with the specific question, we can then extract from the depths of the model and from the enrichment data precisely those connections that are most likely to generate a correct answer,\u201d H\u00fcbotter said in a news release.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Enhancing AI With Targeted Data<\/h2>\n\n\n\n<p>Designed to address the needs of companies, scientists and other users operating in specialized fields, SIFT allows users to integrate their locally stored data into large language models (LLMs). <\/p>\n\n\n\n<p>Traditional methods often rely on the nearest neighbor approach, which can lead to redundant information and missed critical data points. <\/p>\n\n\n\n<p>In contrast, SIFT evaluates the angles between vectors in a multidimensional space to identify and select complementary data, leading to more coherent and reliable responses.<\/p>\n\n\n\n<p>\u201cThe method is particularly suitable for companies, scientists or other users who want to use general AI in a specialized field that is only covered partially or not at all by the AI training data,\u201d added Andreas Krause, head of the research group and director of the ETH AI Centre.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Revolutionizing Response Accuracy and Computational Efficiency<\/h2>\n\n\n\n<p>One of the key advantages of the SIFT algorithm is its ability to improve response quality while significantly reducing the computational power needed. <\/p>\n\n\n\n<p>By indirectly measuring uncertainty, the model dynamically determines the amount of data required for reliable answers. 
This adaptability not only enhances AI performance but also allows smaller models to achieve results comparable to those of much larger counterparts.<\/p>\n\n\n\n<p>\u201cIn tests with standard data sets, we used SIFT tuning to outperform even the best current AI models with models up to 40 times smaller,\u201d H\u00fcbotter added.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Broader Implications and Future Applications<\/h2>\n\n\n\n<p>Beyond improving AI responses, SIFT\u2019s capabilities extend to data evaluation in various sectors, including medicine. By identifying which pieces of data are most relevant to specific inquiries, the algorithm could assist in pinpointing significant laboratory analyses or measurements for diagnostics.<\/p>\n\n\n\n<p>\u201cWe can track which enrichment data SIFT selects. They are closely related to the question and therefore particularly relevant to this subject area. This could be used in medicine, for example, to investigate which laboratory analyses or measurement values are significant for a specific diagnosis and which less so,\u201d Krause added.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Recognition<\/h2>\n\n\n\n<p>H\u00fcbotter is currently presenting his innovative approach at the International Conference on Learning Representations (ICLR) in Singapore. 
<\/p>\n\n\n\n<p>The significance of the researchers\u2019 breakthrough was also recognized at the NeurIPS Annual Conference on Neural Information Processing Systems, where their scientific article won the Best Scientific Article award in the \u201cFine-tuning in Modern Machine Learning\u201d workshop.<\/p>\n\n\n\n<div style=\"height:12px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source:<\/strong> <a href=\"https:\/\/ethz.ch\/en\/news-and-events\/eth-news\/news\/2025\/04\/dank-training-im-betrieb-gibt-ki-zuverlaessige-antworten-mit-weniger-rechenaufwand.html\" target=\"_blank\" rel=\"noopener\" title=\"\">ETH Zurich<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In a significant leap forward for artificial intelligence, researchers at ETH Zurich\u2019s Institute for Machine Learning have developed an innovative algorithm to improve the reliability and efficiency of AI language models. The new method, known as SIFT (Selecting Informative data for Fine-Tuning), enhances AI\u2019s ability to generate accurate responses by reducing uncertainty, a persistent issue [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[88],"class_list":["post-23101","post","type-post","status-publish","format-standard","hentry","category-ai","tag-eth-zurich"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University 
Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"In a significant leap forward for artificial intelligence, researchers at ETH Zurich\u2019s Institute for Machine Learning have developed an innovative algorithm to improve the reliability and efficiency of AI language models. The new method, known as SIFT (Selecting Informative data for Fine-Tuning), enhances AI\u2019s ability to generate accurate responses by reducing uncertainty, a persistent issue&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23101","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=23101"}],"version-history":[{"count":7,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23101\/revisions"}],"predecessor-version":[{"id":23138,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23101\/revisions\/23138"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=23101"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=23101"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=23101"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}