InteliCare Responds

After I wrote about InteliCare’s monitoring service I thought it might be a good idea to reach out and actually ask them some of the questions I had posed in the post. So yesterday I sent them a list of questions. I wasn’t sure how long it would take them to answer, so I was pleasantly surprised to find an email from Jason Waller, the CEO of InteliCare, in my inbox this morning. With his permission I have included his answers beneath each of the questions I sent, below.

  • When a subject leaves the service, what happens to the data collected on them?
    • We keep our data segregated by type in discrete databases so that we can easily separate personal details from the behavioural data. Consequently, when a service is cancelled, the behavioural data is anonymised and stored. However, the data is owned by the individual so if they requested deletion, that is what we would do.
  • Do you intend to use the data gathered to supply “insights” to third parties (i.e. not just to clients using your service)?
    • We believe there may be opportunities in the future to make anonymised behavioural data available for clinical research, with prior permission of the client.
  • Given you are actively looking at devices that track heart rates from a distance (Investor Presentation) what steps are you taking to ensure the data you gather is safe?
    • We have extensive security architecture in place to protect health and other personal data. We recently engaged an independent security consultant to review our data protection. Security across InteliCare was found to be well managed and aligned with industry best practice, and has been assessed to pose a Low risk from a management/procedural security perspective. It was also determined that the company has demonstrated full compliance with the thirteen (13) Australian Privacy Principles (APPs), which are regulated by the Office of the Australian Information Commissioner (OAIC). Additionally, we conduct ongoing automatic penetration testing to identify vulnerabilities.
  • Do you have a Privacy policy?
    • Yes, here. We also have extensive internal data protection policies and frameworks.
  • When a new service is activated do you require active consent from the person being monitored?
    • Yes, and this must be obtained by the carer or family who are the users of the app.

Thanks to Jason Waller for the quick response.
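
As an aside, the “segregated by type in discrete databases” design described in the first answer is easy to picture. Here’s a minimal sketch of how such a split might work; the databases, tables and fields are all my own illustrative guesses, not InteliCare’s actual schema.

    import sqlite3

    # Two separate stores: personal details in one database, behavioural
    # events (keyed only by a pseudonymous subject ID) in another. All
    # names here are illustrative guesses.
    personal = sqlite3.connect("personal.db")
    behaviour = sqlite3.connect("behaviour.db")

    personal.execute("CREATE TABLE IF NOT EXISTS subjects "
                     "(subject_id TEXT PRIMARY KEY, name TEXT, address TEXT)")
    behaviour.execute("CREATE TABLE IF NOT EXISTS events "
                      "(subject_id TEXT, ts TEXT, room TEXT)")

    def anonymise(subject_id: str) -> None:
        """On cancellation: delete the personal record but keep the events.
        With the linking record gone, the events no longer name a person."""
        personal.execute("DELETE FROM subjects WHERE subject_id = ?",
                         (subject_id,))
        personal.commit()

    def delete_entirely(subject_id: str) -> None:
        """On a deletion request: remove both halves of the data."""
        anonymise(subject_id)
        behaviour.execute("DELETE FROM events WHERE subject_id = ?",
                          (subject_id,))
        behaviour.commit()

Strictly speaking, a scheme like this pseudonymises rather than anonymises: whether the retained behavioural data is truly anonymous depends on how hard it would be to re-identify someone from their movement patterns alone.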

Striking a Balance Between Autonomy and Safety

“InteliCare’s 24/7, in-home technology gives you smart insights, direct to your phone, that helps detect and prevent falls and health issues before they happen.”

That’s the pitch given by InteliCare, a five-year-old company that is hoping to use Artificial Intelligence and in-home IoT devices to create a sort of virtual care facility.

The vision is to use devices that track a person around the house (via a pendant worn by that person) to build up a data set of “normal activity”, which can then be used to catch changes in behaviour down the track. Changes in bathroom activity could indicate the onset of a Urinary Tract Infection, for example, while a prolonged lack of movement where there shouldn’t be one might indicate that the person has had a fall. This would allow the person’s family or Care Agency to be more proactive in providing the care that an elderly person or person with disability might require at home, without the need for frequent visits from human staff.
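
To make that concrete, here’s a minimal sketch of the kind of baseline-and-deviation check such a system might run over room-entry events. The event format, the use of daily visit counts and the two-standard-deviation threshold are all my own illustrative assumptions, not InteliCare’s actual method.

    from collections import Counter
    from datetime import datetime
    from statistics import mean, stdev

    # Hypothetical event record: (timestamp, room), logged each time the
    # pendant comes within range of a room's beacon.
    def daily_visits(events: list[tuple[datetime, str]], room: str) -> Counter:
        """Count how many times a given room was entered on each day."""
        return Counter(ts.date() for ts, r in events if r == room)

    def unusual_days(events, room, threshold_sd=2.0):
        """Flag days whose visit count sits more than threshold_sd standard
        deviations from the baseline, e.g. a spike in bathroom visits that
        might hint at a UTI, or a drop in movement after a fall."""
        counts = daily_visits(events, room)
        values = list(counts.values())
        if len(values) < 2:
            return []  # not enough history to establish a baseline
        baseline, spread = mean(values), stdev(values)
        if spread == 0:
            return []  # perfectly regular history, nothing stands out
        return [day for day, n in counts.items()
                if abs(n - baseline) > threshold_sd * spread]

A real system would be far more sophisticated than this, but the basic shape is the same: learn a baseline, then flag deviations from it.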

Combine this with a suite of tools for Agencies to manage their clients, plus plans for ever more complex monitoring devices, and you’ve got a recipe for the Virtual Care Facility of the future!

On the face of it, it sounds pretty good. Let’s face it, we want to keep our ageing and disabled populations in their own homes as much as possible. Developing a system that is unobtrusive and provides health data that can prevent the escalation of issues which, left unchecked, can lead to institutionalisation can only be a good thing.

On the other hand, there are some issues that need to be addressed.

Firstly, there’s the subject of autonomy. If a person agrees to the installation of this service and is comfortable with their movements and activities being tracked, that’s one thing, but what if the system becomes mandatory? InteliCare is already talking about their system as being similar to a house alarm for Insurance Agencies. Will this or similar systems become requirements for health insurance in the future? Will In Home Care Agencies require this or systems like it?

What ability does the person being monitored have to turn off the system if they so desire? The elderly and people with disability might not want the system to know when they’re engaging in intimate activities.

Secondly, there’s the subject of what data is being captured and how it’s being used. InteliCare describes itself as “an AI company, coupled with healthcare”. Their services depend on a constant flow of data building up an intimate picture of the activities of the people being monitored. At first this will be via sensors that monitor when a person enters a room (basically, each room has a beacon that senses when the pendant comes within a certain range). However, InteliCare is already talking about expanding their IoT range to include devices they describe as “radar like” that can monitor heart rates remotely. This would remove the need for the pendant, as the “people radars” would be able to track, from second to second, the exact location of everyone in range.
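
As an aside, the pendant-and-beacon setup is essentially a proximity check. A common way to implement one (this is a generic technique, not necessarily what InteliCare does) is to convert a Bluetooth signal-strength reading (RSSI) into a rough distance using the log-distance path-loss model. Both default values below are illustrative guesses.

    # Rough distance from a BLE RSSI reading via the log-distance
    # path-loss model. tx_power is the calibrated RSSI at 1 metre and
    # path_loss_exponent depends on the environment (about 2 in free
    # space, higher indoors).
    def estimate_distance_m(rssi: float, tx_power: float = -59.0,
                            path_loss_exponent: float = 2.5) -> float:
        return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

    def pendant_in_room(rssi: float, room_radius_m: float = 4.0) -> bool:
        """Treat the pendant as 'in the room' when its estimated distance
        to that room's beacon falls inside an assumed room radius."""
        return estimate_distance_m(rssi) <= room_radius_m

Every “room entered” event in the data set ultimately boils down to repeated checks like this one; the proposed “people radars” would replace that coarse signal with something far more precise.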

The data generated is currently used to inform their clients (the families, Care Agencies or possibly Insurance Agencies) of the health and status of each person under their care. However, what else could that data be used for? Does InteliCare intend to monetise the data further by generating insights that it can sell to third parties? What happens when a person leaves the scheme? Does their data remain, or does InteliCare delete it? Is there provision for the use of this data in academic research?

These are all questions that need answers if we’re going to be placing ourselves into the hands of Machine Learning. No matter how fancy the tech, Informed Consent should always be required for stuff like this.