
Digital intelligence blog

Pharma insight on digital marketing, social media, mobile apps, online video, websites and interactive healthcare tools

Safety first: AI adoption in the healthcare sector

AI's transformative impact on healthcare


While artificial intelligence (AI) is already being used by many businesses to augment and improve the way that they work, many implementations of the technology are somewhat mundane, such as the automation of expense processing or the deployment of a chatbot on an e-commerce website. However, in the healthcare sector, the convergence of big data, cloud computing and machine learning, and the emergence of related technologies including virtual and augmented reality, offers the potential to realise many exciting benefits. For example:

  • Smartphone apps encourage individuals to proactively manage a healthy lifestyle, while healthcare professionals can better understand the day-to-day patterns and needs of the people they care for
  • AI detects diseases, such as cancer, more accurately and in their early stages – the technology is already more accurate and more consistent at interpreting MRI and PET scans
  • Predictive analytics tools support clinical decision-making and actions as well as prioritise administrative tasks, using pattern recognition to identify patients at risk of developing a condition
  • AI helps clinicians take a more comprehensive approach for disease management and better coordinate care plans
  • By directing the latest advances in AI to streamline the drug discovery and drug repurposing processes, there is the potential to significantly cut both the time to market for new drugs and their costs

A recent report by the House of Lords’ AI Select Committee, which devotes a whole chapter to healthcare and AI, highlights significant sectoral opportunities. The Academy of Medical Sciences, giving evidence to the Select Committee, said that “the impact of artificial intelligence on ... the healthcare system is likely to be profound” – through more efficient research and development, the adoption of better methods of healthcare delivery and more informed clinical decision-making. Patients will also be better informed and able to manage their own health needs.

A note of caution

While there is good reason for optimism, we also counsel caution. Poor results and outcomes, misuse, a reliance on bad data, privacy impacts, a perceived lack of transparency, discrimination and an adverse effect on employment could all impact AI’s potential.

The Royal Free London NHS Foundation Trust’s 2015 partnership with DeepMind is a case study of what can go wrong, despite the very best of intentions. DeepMind worked with the Royal Free to develop an app that Royal Free then deployed to assist diagnosis of acute kidney injury (AKI). As part of the development, Royal Free had provided DeepMind with the personal data of around 1.6 million patients. When this came to light, the Information Commissioner’s Office investigated, and ruled that Royal Free had failed to comply with the Data Protection Act 1998 when it provided the data to DeepMind. Although the app did not use artificial intelligence or deep learning techniques, DeepMind’s involvement highlighted a number of the potential issues involved in the use of patient data to develop AI.

The Data Protection Act 1998 has since been replaced by the General Data Protection Regulation, which brings with it the risk of far greater fines and sanctions for getting it wrong.

A safety-first approach

The Select Committee’s report recommended a universal code of ethics for AI, as well as calling for an appropriate legal and regulatory framework for AI developers and users: ‘Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data.’

Businesses developing AI for healthcare sector deployment should address the following key considerations:

  • Ethical conduct. Companies should develop their own principles for the ethical development and use of AI, as well as participating in cross-sector initiatives. Such codes can have value externally, such as when selecting and contracting with supply chain partners
  • Strong governance. Governance committees should be established to ensure that
    codes and principles are observed, issues escalated and independent or ‘peer’ reviews undertaken prior to a product’s deployment
  • Regulatory compliance. Certain software algorithms will be regulated as medical devices and thus subject to CE marking, either through self-certification or through assessment by a notified body, with the MHRA acting as the UK’s competent authority. CE marking is a certification mark that indicates conformity with health, safety and environmental protection standards for products sold within the EEA. Other regulations need to be considered, as applicable, such as the Product Liability Directive and the new Medical Device Regulation, which takes full effect in 2020
  • Data protection and privacy. There are numerous data protection issues to be considered when healthcare and technology converge. The GDPR introduced a right to explanation, which means that the logic of an automated decision can be challenged and the results tested – businesses will need to think carefully before building a ‘black box’ system that cannot explain itself. Where required by the GDPR, data protection impact assessments will be needed and a ‘privacy by design’ approach may be advisable
  • Avoiding bad data. The outcome achieved by an AI system will only ever be as good as the quality of the data on which it bases that outcome. Many variables determine the quality of input data: are the data sets ‘big’ enough, is ‘real-world’ data being used, is the data corrupt, biased or discriminatory? This problem gives tremendous power to those who own large repositories of accurate personal data, and we expect the issue to become a significant focus for regulatory and contractual protection in the coming years, especially in the healthcare sector
  • Partnering with others. When acquiring AI developed by a third party, or when developing it in collaboration with others, businesses should take the opportunity to define the relationship and responsibilities during the contracting process. Important aspects of this include:
    - Preparation and planning: set clear objectives and assemble the best team
    - Vendor and product selection: allow sufficient time for procurement and contracting, develop and validate business requirements, and carefully assess the market
    - Cost: agree a clear pricing model, which could be licence- or transaction-based and may also have elements of risk and reward
    - IP: deal with who owns the software, intellectual property rights and other work product, and ensure licences are wide enough
    - Data: establish who owns the underlying data, including outputs
    - Scope: carefully document the Scope of Services, translating promises made in the sales cycle into commitments that can be relied upon
    - Exit: provide for knowledge transfer services and exit support in the case of an unforeseen termination, to avoid supplier lock-in
    - Compliance: be clear about which party is responsible for meeting which regulatory requirements

Conclusions

AI is happening now and is going to have a major transformative impact on the healthcare industry and our economy in general. Its use will cause disruption and raise risks and challenges that must be assessed and understood. At its most fundamental, this is about artificial systems making real-world decisions that will affect real people, and at a time of growing public debate about the use and commercialisation of personal data on many online platforms, societal concerns and regulations will likely impact the potential value realised from AI. Use and storage of personal information is especially sensitive in the healthcare sector – with questions about privacy, issues of fairness and equity which may arise from bias in data, as well as concerns about transparency and accountability in the use of massively complex (and perhaps not fully understood) algorithms. Businesses and other users of data for AI will need to continue to evolve their business models related to data use in order to address society’s concerns. Furthermore, regulatory requirements and restrictions will continue to evolve and will differ from country to country, as well as from sector to sector. Those involved in an AI-affected business (and spoiler: that’s going to be everyone) need to educate themselves and, where appropriate, seek support and guidance from external experts. In a world where not only can we not know everything, but increasingly we can’t know more than a small fraction of anything, we should focus on what we do need to know to follow a safety-first approach and make the right decisions.

Tim Wright is a Partner and Antony Bott is a Sourcing Consultant, both at Pillsbury Winthrop Shaw Pittman LLP

24th October 2018
