Hattrick IT recently participated in an academic research initiative focused on applying AI risk-assessment frameworks in healthcare, contributing real-world industry insight to a series of workshops led by UK-based researchers under the RADIANT-CERSI initiative.
The workshops explored how existing AI risk management frameworks can be selected, adapted, and applied across the AI product lifecycle, moving beyond theoretical discussion into practical, project-level decision-making. The sessions brought together researchers and healthcare innovators to examine how structured risk approaches can support safer and more responsible AI development.
Representing Hattrick IT, Mariano Ayende, Regulatory Affairs and Quality Assurance Specialist, analyzed these frameworks through a concrete use case: GINA, Hattrick IT’s digital health initiative designed to provide young people with safe, anonymized access to medically verified sexual and reproductive health information, including an AI-based chat component.
As part of the initiative, the team applied RADIANT's adaptation of the NIST AI Risk Management Framework, assessing its suitability for early-stage digital health products. This hands-on application proved valuable not only for identifying potential risks early in development, but also for understanding where current regulatory tools are robust and where further refinement is still needed.
The collaboration highlights Hattrick IT’s ongoing commitment to responsible AI, regulatory readiness, and evidence-based innovation, as well as its role in bridging academic research with real-world healthcare software development.
We thank the RADIANT-CERSI team and participating researchers for creating an open and constructive space for collaboration, and we’re proud to contribute industry experience to the evolving conversation on AI risk management in healthcare.