- Appears in Collections:
Book Chapter: When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices
Title | When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices |
---|---|
Authors | Ho, WLC |
Keywords | artificial intelligence (AI); governance; liminality; medical devices; risk |
Issue Date | 2021 |
Publisher | Cambridge University Press |
Citation | When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices. In Laurie, G ... et al (Eds.), The Cambridge Handbook of Health Research Regulation, p. 277-286. Cambridge, UK ; New York, NY: Cambridge University Press, 2021 |
Abstract | Artificial intelligence and machine learning (AI/ML) medical devices are able to optimise their performance by learning from past experience. In healthcare, such devices are already applied within controlled settings, for instance in image analysis systems that detect conditions like diabetic retinopathy. In examining the regulatory governance of AI/ML medical devices in the United States, it is argued that the development and application of these devices as a technical and social concern, whether in research or in clinical care, must proceed in tandem with their identities in regulation. In the light of emerging regulatory principles and approaches put forward by the International Medical Device Regulators Forum, and endorsed by the US Food and Drug Administration, conventional thinking about clinical research and clinical practice as distinct and separate domains needs to be reconsidered. The high connectivity of AI/ML medical devices that are capable of adapting to their digital environment in order to optimise performance suggests that the research agenda persists beyond what may currently be limited to the pilot or feasibility stages of medical device trials. If continuous risk-monitoring is required to support the use of software as medical devices in a learning healthcare system, more robust and responsive regulatory mechanisms are needed, not less. |
Persistent Identifier | http://hdl.handle.net/10722/311717 |
ISBN | 9781108475976 |
Series/Report no. | Cambridge Law Handbooks |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ho, WLC | - |
dc.date.accessioned | 2022-04-01T09:12:17Z | - |
dc.date.available | 2022-04-01T09:12:17Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices. In Laurie, G ... et al (Eds.), The Cambridge Handbook of Health Research Regulation, p. 277-286. Cambridge, UK ; New York, NY: Cambridge University Press, 2021 | - |
dc.identifier.isbn | 9781108475976 | - |
dc.identifier.uri | http://hdl.handle.net/10722/311717 | - |
dc.description.abstract | Artificial intelligence and machine learning (AI/ML) medical devices are able to optimise their performance by learning from past experience. In healthcare, such devices are already applied within controlled settings, for instance in image analysis systems that detect conditions like diabetic retinopathy. In examining the regulatory governance of AI/ML medical devices in the United States, it is argued that the development and application of these devices as a technical and social concern, whether in research or in clinical care, must proceed in tandem with their identities in regulation. In the light of emerging regulatory principles and approaches put forward by the International Medical Device Regulators Forum, and endorsed by the US Food and Drug Administration, conventional thinking about clinical research and clinical practice as distinct and separate domains needs to be reconsidered. The high connectivity of AI/ML medical devices that are capable of adapting to their digital environment in order to optimise performance suggests that the research agenda persists beyond what may currently be limited to the pilot or feasibility stages of medical device trials. If continuous risk-monitoring is required to support the use of software as medical devices in a learning healthcare system, more robust and responsive regulatory mechanisms are needed, not less. | - |
dc.language | eng | - |
dc.publisher | Cambridge University Press | - |
dc.relation.ispartof | The Cambridge Handbook of Health Research Regulation | - |
dc.relation.ispartofseries | Cambridge Law Handbooks | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | artificial intelligence (AI) | - |
dc.subject | governance | - |
dc.subject | liminality | - |
dc.subject | medical devices | - |
dc.subject | risk | - |
dc.title | When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices | - |
dc.type | Book_Chapter | - |
dc.identifier.email | Ho, WLC: cwlho@hku.hk | - |
dc.identifier.authority | Ho, WLC=rp02632 | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.1017/9781108620024.035 | - |
dc.identifier.hkuros | 332283 | - |
dc.identifier.spage | 277 | - |
dc.identifier.epage | 286 | - |
dc.publisher.place | Cambridge, UK ; New York, NY | - |