When it comes to trustworthy artificial intelligence (AI) in healthcare, Prof Dr Freimut Schliess from Profil GmbH argues that now is the time to deliver
In Europe, chronic diseases account for 86% of deaths and 77% of disease burden, posing a tremendous challenge to societies. At the same time, digitalisation is bringing huge technological and cultural opportunities. In healthcare, the use of data-driven forecasts of individual and population health, enabled by the integration of artificial intelligence (AI) algorithms, has the potential to revolutionise health protection and chronic care provision, while securing the sustainability of healthcare systems.
European start-ups, small and medium-sized enterprises (SMEs) and large corporations offer smart AI-enabled digital solutions, backed by medical evidence. These could help to achieve the WHO Sustainable Development Goals and reduce premature mortality from major chronic diseases by 25% by 2025.1 Some, but too few, of these solutions are available on the market. A paradigm example is the increasing availability and reimbursement of closed-loop metabolic control (artificial pancreas) systems for persons with diabetes.2
“There is no excuse for inaction as we have evidence-based solutions.” This statement was made by the co-chairs of the WHO Independent High-Level Commission on Noncommunicable Diseases in 2018 – paired with an appeal to governments worldwide to “start from the top” in taking action against the global disease burden.3
In April 2019, a high-level expert group on AI set up by the European Commission (EC) published Ethics Guidelines for Trustworthy AI.4 The guidelines lay down criteria for trustworthy AI, with emphasis on ethical, legal and social issues, and aim to promote its adoption.
Three foundations of trustworthy AI
First, trustworthy AI should respect all applicable laws and regulations.
Second, trustworthy AI should adhere to the ethical principles of respect for human autonomy (e.g. people should retain full and effective self-determination); prevention of harm (paying particular attention to vulnerable persons); fairness (e.g. ensuring that people are free from discrimination and stigmatisation and can seek effective redress against AI-enabled decisions); and explicability (e.g. full traceability, auditability and transparent communication about system capabilities, particularly in the case of “black-box” algorithms).
And third, trustworthy AI should be robust and ensure that no unintentional harm occurs, even where intentions are good.
Seven requirements for trustworthy AI
These requirements build on the ethical principles above and should be met by developers, deployers and end-users.
First, human agency & oversight (e.g. decision autonomy, governance); second, technical robustness & safety (e.g. security & resilience to attack, accuracy & reproducibility of outcomes); third, privacy & data governance (e.g. data integrity & protection, governance of data access); fourth, transparency (e.g. data, systems, business models); fifth, diversity, non-discrimination & fairness (e.g. avoidance of bias, co-creation, user-centrism); sixth, societal and environmental well-being (e.g. promotion of democracy, ecological sustainability); and seventh, accountability (e.g. forecast quality, auditability, reporting of negative impacts).
The EIT Health pilot of the EC’s Ethics Guidelines for Trustworthy AI
Most recently, the guidelines underwent an early reality check through EIT Health – a public-private partnership of about 150 best-in-class health innovators backed by the European Union (EU) and collaborating across borders to deliver new solutions that can enable European citizens to live longer, healthier lives.5
A survey among start-ups and entrepreneurs, as well as EIT Health partners from industry, academia and research organisations, indicated currently low awareness of the guidelines (22% of respondents). More than 60% of respondents were aware that their AI application will need regulatory approval.
Among the seven requirements of trustworthy AI, the highest priority was given to privacy & data governance and technical robustness & safety, followed by traceability and human agency & oversight.
Ranked lower, though still relevant, were diversity, non-discrimination & fairness (respondents are addressing this, e.g. through an iterative approach to improving data sets and removing biases); accountability (reliance currently appears to rest on traditional auditing, post-market surveillance and procedures for redress); and societal and environmental well-being (the former appears self-evident for health solutions, while awareness of the latter in the context of health solutions is possibly not yet well established).
It is time to deliver
Clearly, there is a tension between comprehensively resolving every conceivable ethical, legal & social issue, the imperative to finally break down the longstanding barriers to personalised and preventative healthcare (which would save millions of lives), and the need for European societies to compete in the global market for trustworthy AI.
We agree with the recent TIME TO DELIVER appeal from the World Health Organization (WHO).3 In collaboration with vital communities, such as EIT Health, European governments should take the lead in establishing a productive balance between promoting innovation, welcoming global competition and defining healthcare-specific ethical, legal and social requirements for trustworthy AI.
We welcome the idea of establishing “world reference testing facilities for AI”, recently contributed by Roberto Viola (Director General of DG Connect at the EC).6 EIT Health would be in a privileged position to orchestrate such testing facilities for AI, by providing secure validation environments that apply the high ethical and regulatory standards of clinical contract research.
Here, partners from innovation, education and business should collaborate on concrete AI-enabled solutions to assess real risks and opportunities effectively, produce a solution-specific ELSI7 dossier, and then join forces to launch trustworthy AI-enabled solutions to market and scale up the business model.
In this way, European societies could break down innovation barriers and ultimately provide thoroughly validated solutions globally to the persons who need them most.
References
1 As proclaimed by the WHO in 2012 (Gulland, A. 2013, BMJ 346:f3483; http://www.who.int/nmh/en/)
2 Schliess, F. et al. 2019, J. Diabetes Sci. Technol. 13(2):261-267; https://www.eithealth.eu/en_US/close; https://www.eithealth.eu/diabeloop-for-children; https://www.eithealth.eu/en_US/d4teens. close, diabeloop-for-children and d4teens are innovation projects dedicated to closing the loop in diabetes care. They are supported by EIT Health, a network of best-in-class health innovators that collaborates across borders and delivers solutions to enable European citizens to live longer, healthier lives. EIT Health is supported by the EIT, a body of the European Union.
3 https://www.who.int/ncds/management/time-to-deliver/en/
4 https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
5 https://www.eithealth.eu/-/eit-health-holds-panel-on-ai-and-ethics-at-the-world-health-summit
6 https://www.eithealth.eu/ai-and-health
7 ELSI, ethical, legal and social issues.
Please note: This is a commercial profile