
AI · Data Privacy · Health Tech · Regulation
New data from Ubie Health reveals that 75% of users overshare personal health information with AI symptom-checkers, exposing sensitive details to platforms that often lack adequate safeguards and raising significant privacy and regulatory concerns.
The article, authored by Jesse Zucker, highlights that 44% of users include real-life event context and 28% upload photos or documents with hidden metadata, making identity triangulation possible. Experts, including Ubie Health CEO Kota Kubo, emphasize that users often treat AI bots as private diaries, unaware of data-retention policies or the potential monetization of their inputs.
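To make the metadata risk concrete, here is a minimal sketch of how much identifying EXIF data a casually uploaded photo can carry. The article names no tooling, so Python with the Pillow library and the file path are assumptions for illustration:

```python
# Minimal sketch: inspecting the hidden EXIF metadata a photo can carry.
# Assumes Python with Pillow >= 9.2 installed (pip install Pillow); the
# file path "symptom_photo.jpg" is a hypothetical example.
from PIL import Image, ExifTags

img = Image.open("symptom_photo.jpg")
exif = img.getexif()

# Print every EXIF tag by name: device model, capture timestamp,
# editing software, and similar identifying details.
for tag_id, value in exif.items():
    tag = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{tag}: {value}")

# GPS coordinates live in a separate IFD; combined with a timestamp they
# can place the user at a specific location, which is what makes
# identity triangulation possible.
gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
for tag_id, value in gps.items():
    tag = ExifTags.GPSTAGS.get(tag_id, tag_id)
    print(f"GPS {tag}: {value}")
```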
A 2023 review identified risks such as data leakage and model training on user inputs, noting that only one in four leading AI tools meets GDPR or HIPAA standards. To mitigate these risks, the article advises users to strip personal identifiers, generalize real-world details, remove metadata from uploads, and choose AI tools with transparent privacy policies, end-to-end encryption, and data-retention limits.
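As a rough illustration of that advice, the sketch below strips EXIF metadata from an image by rebuilding it from raw pixels, and masks common direct identifiers in text before it is pasted into a chatbot. The function names, regexes, and sample text are illustrative assumptions, not from the article:

```python
# Minimal sketch of the mitigation advice: strip metadata from uploads and
# remove obvious personal identifiers from text. Function names, regexes,
# and file paths are illustrative assumptions, not the article's method.
import re
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image from raw pixel data only, discarding EXIF/GPS tags.

    Works for common RGB images; palette-based formats may need extra care.
    """
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)  # no exif= argument, so no metadata is written

def redact_identifiers(text: str) -> str:
    """Mask emails, phone numbers, and exact dates before sharing text."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)    # emails
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)       # phone numbers
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)  # exact dates
    return text

print(redact_identifiers(
    "Rash started 3/14/2024, contact me at jane.doe@example.com or +1 (555) 123-4567."
))
# -> "Rash started [DATE], contact me at [EMAIL] or [PHONE]."
```

A simple regex pass like this only catches direct identifiers; generalizing real-world details (dates, places, events) still has to be done by the user, since those are exactly the contextual clues the article says enable triangulation.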