-
New Blueprint Shows Governments How to Govern Responsibly With AI
As governments rush to adopt artificial intelligence, an international team of experts has drafted a practical blueprint to help public agencies use AI to improve services without sacrificing trust, accountability or democratic values.
-
AI-Powered Electronic Nose Sniffs Out Early Ovarian Cancer
Researchers in Sweden have trained an AI-guided electronic nose to detect ovarian cancer from a simple blood sample. The fast, low-cost test could one day help catch deadly cancers much earlier.
-
Georgia Tech Scholar: Fears of All-Powerful AI Are Misplaced
A Georgia Tech policy expert argues that fears of an all-powerful, humanity-ending AI are misguided. Instead, he says, society should focus on targeted rules that keep real-world AI systems aligned with human values.
-
New Method to Steer AI Language Models Reveals Risks and Rewards
A new study shows that researchers can directly steer concepts inside large language models, making them more accurate and efficient — but also easier to jailbreak. The work opens a path to safer, more transparent AI while underscoring how fragile current guardrails can be.
-
Personalized AI Chatbots Risk Becoming Yes-Men
As AI chatbots learn more about us, they may become too eager to agree. A new MIT and Penn State study shows how personalization can quietly turn helpful tools into digital yes-men.
-
AI Chatbots Help Predict Preterm Birth From Big Data in Minutes
In a head-to-head test, AI chatbots turned massive pregnancy datasets into working prediction models in minutes, rivaling expert teams that spent months. The work hints at a faster path to tools that could help protect mothers and babies from preterm birth.
-
MIT AI Model Learns Yeast DNA Language to Cut Drug Costs
MIT chemical engineers used a large language model to learn how industrial yeast reads DNA, then applied it to make protein drugs more efficiently. The approach could help cut the time and cost of bringing new biologic medicines to patients.
-
Feeling Connected Boosts Trust in AI, Global Study Finds
A major international study finds that trust in AI depends less on technical performance and more on whether people feel connected, included and supported when using it. The results point to a future where human-centered design and governance are key to building trustworthy AI.
-
UC San Diego Team Teaches AI to Truly ‘Show Its Work’
A new training method from UC San Diego helps AI reason more like a careful student, not a guesser, especially on math problems that mix text and images. The approach could power safer AI tutors and more reliable analysis of charts, reports and scientific papers.
-
Chatbot Bias Can Sway What You Buy, UC San Diego Study Finds
Chatbots that summarize product reviews can quietly shift how people feel about what they read — and what they buy. A new UC San Diego study shows just how powerful that influence can be, and why it matters far beyond shopping.