Canada | Publication | March 4, 2021
The Information and Privacy Commissioner of Ontario recently unveiled new guidance for virtual care providers: Privacy and Security Considerations for Virtual Health Care Visits: Guidelines for the Health Sector. Although the guidance relates specifically to custodians subject to Ontario’s Personal Health Information Protection Act, every health care provider in Canada who is currently using any virtual technology to provide health care services should review the guidance. Most of the same principles exist under the privacy laws of other provinces, and thus the guidance will be useful for health professionals no matter where they practice.
The guidelines outline a number of steps to enhance privacy and security in virtual care. First, health care providers are reminded that, in addition to privacy legislation, their own regulated health professions college will have professional standards that also apply, and there may be other provincial laws relevant to health care provision that should not be overlooked.
The guidelines recommend that when providing health care virtually, a health care provider should first conduct a privacy impact assessment (a PIA) of the tools and processes that will be used. Some provincial privacy laws make it mandatory to conduct a PIA. Even where it isn’t mandatory, doing a PIA is good practice to ensure that any privacy or security risks are identified and mitigated at the outset.
Providers are advised to develop and implement a "virtual health care policy" to address the specific issues and risks associated with providing care virtually. Providers should also ensure that staff receive appropriate privacy and security training to reduce those risks. Delivering care virtually raises new privacy and security risks to patients' personal health information and to the privacy rights of health care providers, staff and others; remote connectivity and working from home may materially affect a provider's ability to protect patient confidentiality, and providers need to take that into account.
The guidelines also indicate that custodians should have an information security management framework for monitoring, assessing and mitigating security risks, and must have a privacy breach protocol that is triggered in the event of a breach.
The guidelines provide helpful information for health care providers when selecting a vendor for a virtual care platform, including recommended contractual terms and key issues to avoid. The guidelines also address key issues such as email and messaging technology, videoconferencing and patient portals. Detailed advice about safeguards and consent is also included.
All health care providers who are currently providing care virtually should review this new guidance and take steps to ensure that their own practices appropriately protect patient privacy, and meet the legal and professional standards in their jurisdiction.
© Norton Rose Fulbright LLP 2023