AI Policy
HealthCentreApp is preparing AI-supported features to help users understand and organise health information. These capabilities are not yet available. We will update this policy before launch.
Purpose
Our aim is to support health equity by offering low-cost, high-impact digital health tools. AI features are intended to help users understand health information and prepare for better conversations with care providers.
What AI may be used for
When available, AI may help with:
- Drafting symptom summaries and suggested questions
- Explaining health terms in plain language
- Summarising user-entered notes and records
- Highlighting possible next steps and when to seek urgent care
- Supporting workflows such as the CareRequest, RemoteCare, Medication, and ChildHealth features
Availability may vary by plan, country, and feature rollout.
What AI will not do
- Provide emergency services or real-time clinical monitoring
- Guarantee accuracy or completeness
- Replace clinical judgement or professional care
- Make decisions for you without your control
Third-party AI models
We expect to use one or more trusted third-party AI models to deliver AI features. We have not yet selected a provider.
Before launch, we will update this policy to reflect the categories of providers we use (for example, model hosting and safety filtering), and the safeguards applied for any international processing.
Your control and consent
- AI features will be optional.
- We will ask for your permission before AI features access your health information.
- You can choose whether AI can access existing records, recent measurements, or recent CareRequests.
- You can withdraw consent at any time by turning off AI access or removing data.
Data minimisation
We aim to use the minimum information needed to provide an AI feature. You should avoid entering unnecessary personal details into free-text fields.
Data use for training
We do not intend to use your data to train public AI models. If we introduce model improvement using user data in the future, we will explain it clearly and provide appropriate choices.
Safety and quality controls
To reduce risk, we plan to use safeguards such as:
- Clear warnings and reminders to seek professional advice
- High-risk symptom prompts that encourage urgent escalation
- Content filters for unsafe outputs
- Monitoring for misuse and abuse patterns
- Continuous improvement based on user feedback
Bias and fairness
AI can reflect bias present in its training data. We will work to test for and reduce bias, especially for users in low- and middle-income countries. We prioritise clarity, safety, and equity.
Updates
We will update this AI Policy as AI features evolve and as we confirm third-party providers and safeguards.