Approaching hallucinations: MIT spinoff trains AI to say when it does not know

AI Hallucinations Are Getting Riskier—This MIT Spinout Is Teaching AI When to Say “I Don’t Know”

In an age when we trust AI systems to help make life-and-death decisions—whether in healthcare or infrastructure—you would hope they would say so when they have no clue. Instead, they frequently just make things up.

These instances, called AI hallucinations, are not just inconvenient anymore—they are becoming dangerous.

Imagine a chatbot confidently recommending a cancer treatment… based on bad data. Or an AI model approving a drilling site despite highly uncertain seismic data. That is no longer science fiction; it is a present and growing problem, and it is exactly where Themis AI, a bold MIT spinoff, steps in.

🧠 Themis AI: Making Machines Smarter by Teaching Them to Doubt

Founded in 2021 by MIT professor Daniela Rus, along with her research colleagues Alexander Amini and Elaheh Ahmadi, Themis AI aspires to do something that seems simple but is very hard: to teach AI systems to say, “I am not sure.”

Their flagship platform, Capsa, serves as an internal compass for any AI model. Instead of letting a model pass off random guesses or coherent-sounding gibberish, Capsa lets it quantify the uncertainty in its own output and signal when it may be confused, biased, or working from corrupted or weak data.

Imagine it as a digital conscience. Where traditional AIs simply fake confidence, Capsa is the voice in the back of their head saying, “This could be wrong – you may want to double-check before someone gets hurt.”
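
Themis AI hasn’t published Capsa’s exact mechanics in this story, but a common way to give a model that kind of self-doubt is to run several stochastic forward passes and treat the spread of the answers as an uncertainty signal. Here is a minimal PyTorch sketch of the idea using Monte Carlo dropout; the model, threshold, and function names are illustrative assumptions, not Capsa’s actual API.

```python
import torch
import torch.nn as nn

# Minimal sketch: Monte Carlo dropout as an uncertainty signal.
# Illustrative only; this is not Capsa's actual API.

class SmallRegressor(nn.Module):
    def __init__(self, in_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p=0.2),  # left active at inference for MC sampling
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=50):
    """Run several stochastic forward passes; return (mean, std).

    A large std across passes means the model is unsure and its
    answer should be double-checked before anyone acts on it."""
    model.train()  # keep dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = SmallRegressor()
x = torch.randn(4, 8)  # a batch of four hypothetical inputs
mean, std = predict_with_uncertainty(model, x)
for m, s in zip(mean.squeeze().tolist(), std.squeeze().tolist()):
    flag = "LOW CONFIDENCE" if s > 0.5 else "ok"
    print(f"prediction={m:.3f}  uncertainty={s:.3f}  [{flag}]")
```

The exact numbers don’t matter; the point is that the model now emits a second channel—“how sure am I?”—that downstream systems can act on.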

⚠️ Why This Matters: When Guessing Becomes Dangerous

Everyone knows someone who gives dubious advice with far too much confidence. But when that someone is a machine choosing your treatment, controlling a national power grid, or helping a self-driving car interpret the road, that “know-it-all” attitude can be life-threatening.

What most people don’t realize: AI models today are chronic guessers.

Often, they are confidently wrong – because they were not built to know when they don’t know. Themis AI is changing this by giving machines a form of self-awareness: the ability to understand their uncertainty.

🚀 From Lab Breakthroughs to Industry Impact

Themis AI’s journey began in MIT labs, where the team first focused on bias and reliability in high-consequence AI systems. Their academic work caught the attention of Toyota, which underwrote research into safer self-driving technologies. Why? Because in a self-driving vehicle, the cost of a road hallucination is not misinformation – it is lives.

Later, their algorithms tackled a critical concern in facial recognition: detecting and remediating social and gender bias by rebalancing the AI’s training data. The true light-bulb moment, however, was realizing how broadly the same approach could be applied across industries.

Fast-forward to today, and Themis AI has:

  • Prevented telecom network planning mistakes by having the AI flag uncertainty in its data inferences.
  • Helped oil & gas companies make better seismic-analysis decisions by showing when the AI was inferring from incomplete geodata.
  • Improved drug discovery pipelines by ensuring AI models only pursue candidates backed by confident, data-driven predictions.
  • Developed safer chatbots, training them to avoid confidently delivering false or misleading information (a sketch of that kind of confidence gating follows this list).
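
To make that last point concrete, confidence gating for a chatbot can be as simple as checking the model’s own token probabilities before letting an answer through. In this hypothetical sketch, answer() is a stand-in for a real model call, and the threshold is an invented tuning knob:

```python
import math

# Hypothetical sketch of a confidence-gated chatbot reply. answer() and its
# numbers are stand-ins for a real model call that returns token log-probs.

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; tune per application

def answer(question: str):
    """Pretend model call: returns a draft answer and per-token log-probs."""
    draft = "The capital of Australia is Canberra."
    token_logprobs = [-0.05, -0.10, -0.02, -0.30, -0.01, -0.08]
    return draft, token_logprobs

def confident_reply(question: str) -> str:
    draft, logprobs = answer(question)
    # Geometric-mean token probability as a crude sequence-level confidence.
    confidence = math.exp(sum(logprobs) / len(logprobs))
    if confidence < CONFIDENCE_THRESHOLD:
        return "I'm not sure about that. Please check a reliable source."
    return draft

print(confident_reply("What is the capital of Australia?"))
```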

💡 A Game Changer for Edge Devices

What makes Themis AI’s approach even more significant is an area I haven’t mentioned yet: edge computing. Devices with limited processing power, like drones, smartphones, and medical wearables, operate under tight constraints that normally restrict how intelligent they can be.

With the technology developed by Themis AI, smaller on-device models can recognize when they are in over their heads and ask for server-side help only when truly necessary. That is a massive step toward AI that works in the real world more intelligently, more quickly, and possibly more energy-efficiently.
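
In practice, that edge-to-cloud handoff can be as simple as a threshold on the on-device model’s uncertainty estimate. A minimal sketch, assuming hypothetical edge_model and server_model functions and an invented cutoff:

```python
# Minimal sketch of uncertainty-gated edge/cloud deferral. Both models and
# the threshold are invented stand-ins, not Themis AI's implementation.

UNCERTAINTY_THRESHOLD = 0.2  # assumed: above this, escalate to the server

def edge_model(reading: float):
    """Cheap on-device model: returns (label, estimated uncertainty)."""
    label = "anomaly" if reading > 0.7 else "normal"
    # Readings near the 0.7 decision boundary are the least certain.
    uncertainty = max(0.0, 0.3 - abs(reading - 0.7))
    return label, uncertainty

def server_model(reading: float) -> str:
    """Expensive remote model, consulted only when the edge model is unsure."""
    return "anomaly" if reading > 0.72 else "normal"

def classify(reading: float) -> str:
    label, uncertainty = edge_model(reading)
    if uncertainty > UNCERTAINTY_THRESHOLD:
        return server_model(reading)  # escalate the hard cases
    return label  # easy cases stay local, saving a network round trip

for reading in (0.10, 0.69, 0.71, 0.95):
    print(f"reading={reading:.2f} -> {classify(reading)}")
```

The payoff is that the device answers most queries locally and pays the network and energy cost of a server call only for the genuinely ambiguous cases.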

🧬 Uncertainty is a Friend in Healthcare

In fields like oncology and drug development, the potential impact of Themis AI is huge. Pharmaceutical researchers are using tools that not only analyze molecular data but can also discern how much of the output is solid insight versus algorithm-driven guesswork.

This saves time. It saves money. And it may save lives, by directing attention to the most promising drug candidates rather than squandering millions on AI hallucinations.
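
As a concrete (and entirely invented) illustration, a screening pipeline might triage candidates by keeping only predictions that are both strong and confident:

```python
# Invented illustration of confidence-filtered candidate triage; the
# molecules, scores, and cutoff do not come from any real pipeline.

# candidate -> (predicted binding affinity, model uncertainty)
candidates = {
    "mol-A": (0.92, 0.05),
    "mol-B": (0.88, 0.40),  # strong score, but the model is guessing
    "mol-C": (0.81, 0.10),
    "mol-D": (0.95, 0.55),  # likely a hallucinated hit
}

MAX_UNCERTAINTY = 0.15  # assumed: pursue only confident predictions

def triage(preds):
    """Keep strong, confident candidates, ranked by predicted affinity."""
    confident = [
        (name, score)
        for name, (score, unc) in preds.items()
        if unc <= MAX_UNCERTAINTY
    ]
    return [name for name, _ in sorted(confident, key=lambda t: -t[1])]

print(triage(candidates))  # ['mol-A', 'mol-C']: only the confident hits
```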

“We’re not just marking when an AI is wrong. We’re giving it a sort of sixth sense for when it is possibly wrong,” said co-founder Alexander Amini.

🛡️ The Future of Responsible AI: Know What You Don’t Know

AI is infiltrating our lives—whether we realize it or not. We already use AI for financial modeling, transportation systems, disease prediction, and national security. These tools are informing high-stakes decisions.

Though AI can be very sophisticated, it can act like a very overconfident idiot if we do not make it self-aware.

That’s why Themis AI’s work is so important.

Themis AI is creating humble AI. Not just because humility is safer, but because it is smarter. Smart enough to say, “I don’t know.” Smart enough to ask for help. Smart enough to stay quiet when it might be wrong instead of opening its mouth and proving it is an idiot.

As the race for ever-faster AI models ramps up, we need wiser models, not just faster ones. Themis AI is quietly leading that charge. If they succeed, maybe the next generation of intelligent systems will have the most human quality of all:

Knowing when to stop and think.
