The adoption of Artificial Intelligence in medicine is no longer just a technological decision; it has also become a matter of governance, compliance, and liability. On February 11, 2026, Brazil's Federal Council of Medicine (CFM) approved a specific regulation to govern the research, development, auditing, monitoring, training, and responsible use of AI models, systems, and applications in the medical environment.

A point that significantly helps organize responsibilities (including contractual ones) is the distinction the resolution draws between “model,” “system,” and “application.” The model is the algorithmic component itself, capable of generating classifications, predictions, and recommendations. The system is the solution that processes data and produces results that influence health-related processes, such as decision support or embedded systems. The application, in turn, is the concrete use of that system in a specific context, such as clinical care, forensic examination, teaching, research, or management. In practice, this matters because the same model can be integrated into different systems, and the same system can have distinct uses, each involving different risks and duties. In other words: it is not enough to “have AI”; it is necessary to map where, how, and with what impact it is used.
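
To make that mapping concrete, here is a minimal sketch in Python of what such an inventory could look like; every class and field name is illustrative, not taken from the resolution:

```python
# A minimal inventory sketch: the same model can sit in different systems,
# and the same system can back different applications, each with its own
# risk profile. All names here are illustrative, not from the resolution.
from dataclasses import dataclass, field

@dataclass
class Model:
    """The algorithmic component: classifications, predictions, recommendations."""
    name: str
    version: str

@dataclass
class System:
    """A solution embedding one or more models and producing health-related outputs."""
    name: str
    models: list[Model] = field(default_factory=list)

@dataclass
class Application:
    """A concrete use of a system in a specific context."""
    system: System
    context: str          # e.g. "clinical care", "forensic examination", "teaching"
    clinical_impact: str  # feeds the risk classification discussed further below

triage_model = Model("symptom-classifier", "2.1")
triage_system = System("er-triage-support", models=[triage_model])
inventory = [
    Application(triage_system, context="clinical care", clinical_impact="high"),
    Application(triage_system, context="teaching", clinical_impact="low"),
]
```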

The resolution also reinforces principles that must guide implementation: safety, evidence-based transparency, equal treatment (isonomy), and ethics. Proportionality appears as a central theme: auditing and monitoring should vary with the risk and impact of the application, seeking practical feasibility without overriding legitimate confidentiality interests (such as industrial and trade secrets). At the same time, transparency cannot be merely rhetorical: it must be supported by metrics and indicators of accuracy, efficacy, and safety, with clear communication whenever the information is directed at patients and users.

At the heart of the text is the reaffirmation that AI is a support tool, not a substitute for the physician. Professional autonomy is preserved: physicians must have access to information on purpose, functionality, limitations, and risks, and they may refuse systems that lack adequate validation or pertinent certification, or that conflict with ethical and legal parameters. That autonomy, however, comes paired with very objective duties.

The physician remains ultimately responsible for clinical, diagnostic, and therapeutic decisions: they must exercise critical judgment over recommendations, stay updated on the system's capabilities and limits, use only solutions compatible with current regulations, and record in the patient's chart when AI was used for decision support.
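
A brief sketch of what such a chart entry could contain; the resolution requires the record itself, not any particular schema, so the fields below are a hypothetical choice:

```python
# A sketch of an AI-support chart entry. The resolution requires recording
# that AI was used for decision support; the schema below is hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AiSupportNote:
    system_name: str         # which AI system produced the recommendation
    system_version: str
    recommendation: str      # what the system suggested
    physician_decision: str  # what the physician actually decided
    followed: bool           # whether the recommendation was adopted
    recorded_at: datetime

chart: list[AiSupportNote] = []
chart.append(AiSupportNote(
    system_name="er-triage-support",
    system_version="2.1",
    recommendation="priority: urgent",
    physician_decision="priority: urgent, plus cardiology referral",
    followed=True,
    recorded_at=datetime.now(timezone.utc),
))
```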

For clinics and hospitals, this tends to require real (and documentable) adjustments: training, protocols, usage criteria, refusal mechanisms, and validation routines that allow for effective supervision — not just “supervision on paper.”

In the physician-patient relationship, the resolution points to immediate operational impacts. Patients have the right to be informed, in a clear and accessible manner, when AI is used as a relevant support in their care.

Furthermore, there is an express prohibition: the communication of diagnoses, prognoses, or therapeutic decisions cannot be delegated to AI without human mediation. Added to this is respect for patient autonomy, including the right to refuse the use of AI on an informed basis. In practice, this may require revising care and communication workflows, with special attention to the design of interfaces and tools (such as chatbots and assistants) so that automation does not overstep ethical and regulatory boundaries.
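
One way to read the prohibition as a design constraint: AI may draft, but clinically sensitive communications need an explicit human sign-off before release. A minimal guardrail sketch, with illustrative names and categories:

```python
# A guardrail sketch: AI may draft messages, but diagnoses, prognoses, and
# therapeutic decisions only go out with human sign-off. The categories and
# names are illustrative assumptions, not taken from the resolution.
from enum import Enum, auto

class MessageKind(Enum):
    ADMINISTRATIVE = auto()        # scheduling, reminders: may be automated
    DIAGNOSIS = auto()
    PROGNOSIS = auto()
    THERAPEUTIC_DECISION = auto()

REQUIRES_HUMAN_MEDIATION = {
    MessageKind.DIAGNOSIS,
    MessageKind.PROGNOSIS,
    MessageKind.THERAPEUTIC_DECISION,
}

def release_message(kind: MessageKind, draft: str, approved_by: str | None = None) -> str:
    """Refuse to send clinically sensitive content without a physician's approval."""
    if kind in REQUIRES_HUMAN_MEDIATION and approved_by is None:
        raise PermissionError(f"{kind.name} cannot be communicated without human mediation")
    return draft

# An appointment reminder can flow automatically...
release_message(MessageKind.ADMINISTRATIVE, "Your appointment is tomorrow at 10:00.")
# ...but a diagnosis draft must carry an identified human approver.
release_message(MessageKind.DIAGNOSIS, "Findings consistent with ...", approved_by="CRM-12345")
```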

Regarding data, the resolution reinforces requirements compatible with the criticality of the sector: the processing of information (especially sensitive personal data) must strictly observe the LGPD (Brazil's General Data Protection Law) and health information security standards.

The text emphasizes confidentiality, integrity, and protection against unauthorized access, destruction, loss, alteration, or leakage, with technical and administrative measures compatible with the state of the art and the sensitivity of the data involved. In more direct terms: in healthcare, AI projects must be born with privacy and security by design and by default, especially when integrated with medical records and clinical databases.
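
As a tiny illustration of the “by default” posture, here is a sketch in which sensitive fields stay masked unless the caller holds an explicit, purpose-bound grant; the field and purpose names are assumptions, and real LGPD compliance goes far beyond this:

```python
# A "privacy by default" illustration: sensitive fields stay masked unless
# the caller holds an explicit, purpose-bound grant. Field and purpose names
# are assumptions; actual LGPD compliance involves far more than this.
SENSITIVE_FIELDS = {"cpf", "diagnosis", "genetic_data"}

def view_record(record: dict, purpose: str, grants: set[str]) -> dict:
    """Return a copy of the record, redacting sensitive fields by default."""
    allowed = purpose in grants
    return {
        key: (value if key not in SENSITIVE_FIELDS or allowed else "***redacted***")
        for key, value in record.items()
    }

record = {"name": "A. Silva", "cpf": "123.456.789-00", "diagnosis": "..."}
print(view_record(record, purpose="clinical-care", grants=set()))             # masked
print(view_record(record, purpose="clinical-care", grants={"clinical-care"})) # full view
```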

In the field of ethical-professional liability, the message is unambiguous: the use of AI does not reduce medical duty. The physician remains fully responsible for acts performed through the use of AI, without prejudice to other applicable liabilities, and the regulation further reinforces the duty to report failures, relevant risks, or improper uses that could compromise patient safety or the quality of care. This elevates the importance of internal incident-reporting channels, risk management routines, and the documentation of corrective and preventive measures, a point that also affects the relationship with vendors.
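
A minimal sketch of an incident record supporting that duty; again, the resolution mandates the duty, not a schema, so the fields and sample data are illustrative:

```python
# An incident-record sketch for the duty to report failures, risks, and
# improper uses. Every field name and the sample data are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AiIncident:
    system_name: str
    description: str             # what failed or was misused
    patient_safety_impact: bool
    corrective_action: str       # how it was contained or fixed
    preventive_action: str       # what should keep it from recurring
    vendor_notified: bool        # also relevant to the contractual relationship
    reported_at: datetime

incident_log: list[AiIncident] = []
incident_log.append(AiIncident(
    system_name="er-triage-support",
    description="Update 2.2 systematically under-prioritized pediatric cases",
    patient_safety_impact=True,
    corrective_action="Rolled back to 2.1; affected cases re-triaged",
    preventive_action="Pre-deployment validation now includes a pediatric cohort",
    vendor_notified=True,
    reported_at=datetime.now(timezone.utc),
))
```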

Another relevant pillar is the risk classification of AI solutions, which must be carried out in advance by the user institution or the developer. The assessment considers, among other factors, the potential impact on fundamental rights and health, the criticality of the context of use, the degree of model autonomy, the level of human intervention, and the quantity and sensitivity of the data involved. The annex outlines categories such as low risk (administrative uses or uses with low clinical impact), medium risk (support for important decisions, with risk mitigable by human supervision), and high risk (high potential for physical, psychological, or moral harm, requiring rigorous validation, regular audits, and continuous monitoring). One point worth noting: the classification is not “forever.” The regulation provides for reassessment, so a solution may change categories as it evolves, gains autonomy, or changes context.
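
A hedged sketch of how an institution might operationalize the three tiers follows; the factors mirror the resolution's list, but the 0-2 scoring and the thresholds are simplifying assumptions, since the text prescribes the factors, not a formula:

```python
# A hedged sketch of the annex's three tiers. The factors mirror the
# resolution's list; the 0-2 scoring and the thresholds are assumptions.
from enum import Enum

class Risk(Enum):
    LOW = "low"        # administrative or low clinical impact
    MEDIUM = "medium"  # important decisions, mitigable by human supervision
    HIGH = "high"      # high potential for physical, psychological, or moral harm

def classify(impact_on_rights: int, context_criticality: int,
             model_autonomy: int, human_oversight: int,
             data_sensitivity: int) -> Risk:
    """Each factor scored 0-2 by the institution or developer; higher = riskier.
    More human oversight lowers the total, so that factor is inverted."""
    score = (impact_on_rights + context_criticality + model_autonomy
             + (2 - human_oversight) + data_sensitivity)
    if score >= 7:
        return Risk.HIGH
    if score >= 4:
        return Risk.MEDIUM
    return Risk.LOW

# Classification is not "forever": re-run it when autonomy or context changes.
print(classify(2, 2, 1, 2, 2))  # Risk.HIGH, e.g. autonomous diagnostic support
print(classify(0, 0, 0, 2, 1))  # Risk.LOW,  e.g. administrative scheduling
```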

Finally, the resolution addresses institutional governance in a very practical way: it calls for internal processes to ensure safety, quality, and ethics; encourages the mitigation of discriminatory biases through continuous monitoring; reinforces interoperability; values auditable and parameterizable systems (avoiding “black boxes” that make governance impossible); and requires life-cycle management with periodic reviews, corrections, controlled updates, and continuous improvement. It also provides for access by control bodies, when requested by a competent authority, to reports and information necessary for independent supervision, within legal limits. This set of measures tends to have a direct impact on vendor contracts: what reports exist, what metrics are monitored, how changes are recorded, how auditability is ensured, and what levels of explainability/contestability will be delivered, compatible with the risk and context of use.
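
A closing sketch of life-cycle management as an auditable trail, which is also the kind of artifact vendor contracts will need to expose; the names, metrics, and dates are illustrative assumptions:

```python
# A life-cycle sketch: every validation, deployment, audit, and update is a
# recorded event, and the trail is exportable for independent supervision.
# Names, metrics, and dates are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class LifecycleEvent:
    system_name: str
    version: str
    event: str        # "validated", "deployed", "audited", "updated", "retired"
    metrics: dict     # accuracy/safety indicators backing transparency claims
    approved_by: str
    on: date

audit_trail: list[LifecycleEvent] = []
audit_trail.append(LifecycleEvent(
    system_name="er-triage-support", version="2.1", event="audited",
    metrics={"sensitivity": 0.94, "false_negative_rate": 0.03},
    approved_by="governance-committee", on=date(2026, 9, 1),
))

def export_for_regulator(trail: list[LifecycleEvent]) -> list[dict]:
    """Produce the kind of report a control body could request, within legal limits."""
    return [vars(event) for event in trail]
```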

As for when it takes effect, the resolution enters into force 180 days after its publication and allows for a transition period when a new guideline imposes a new duty and the transition is indispensable for proportional and efficient compliance. For clinics, hospitals, and health technology companies, the message is that the responsible implementation of AI now necessarily involves documentation, processes, training, risk management, and contractual alignment, with the patient and clinical safety at the center.