Priya Narasimhan
Senior Director, Product Management
A custom AI tool created by Mathematica and its partner helped the Centers for Medicare & Medicaid Services (CMS) cut response times to certain hospital questions by 35 percent, delivering clearer answers to providers sooner. As agencies experiment with AI, CMS shows what governed AI can look like in practice: faster operations without sacrificing accuracy, transparency, or expert oversight.
A generative AI-powered chatbot developed by Mathematica helped CMS cut annual costs for an analysis of an agency rule by roughly 94 percent. The AI solution automated time-intensive document comparison, freeing experts to focus on higher-value policy analysis. The result was not just more efficient rulemaking but a scalable model for using AI to support smarter, more transparent decision making in health policy and beyond.
Most public evaluations of large language models (LLMs) rely on simplified or artificial data, making it hard to tell whether these tools can conduct the complex analyses used in real-world policymaking and research. To close that gap, Mathematica developed a prototype cloud-based LLM evaluation framework that tests how well different AI models analyze complex, survey-based data, helping organizations understand when and how AI can responsibly support accurate, transparent decision making.
“In a recent project with the National Science Foundation, we demonstrated how generative AI can strengthen transparency and trust in federal statistics. We built a prototype platform that tracks how federal data assets are used across research, media, policy, and public reporting.
“By using AI as a classifier, quality-checker, chatbot, and coding assistant—while keeping humans in the lead—we improved data quality, reduced burden on government staff, and created a scalable model for turning complex information into accessible, trustworthy insights.”
Mathematica applies AI responsibly, ethically, and with rigorous human‑in‑the‑lead oversight to safeguard privacy, security, and public trust.
Guided by our AI Principles & Position Statement, we use vetted, enterprise-grade tools; apply strict data-protection and governance controls; and ensure that all AI-supported work is reviewed by experts for accuracy, fairness, and alignment with client goals. Our safeguards framework reflects our commitment to transparency, strong governance, and the responsible use of trusted data.
Clients interested in learning more about our organizational AI guardrails can contact us at info@mathematica-mpr.com for a detailed overview.