How To Handle Sensitive Information in Your Next AI Project

It's crucial to handle sensitive user information with care. Whether it's personal data, financial details, or health information, understanding how to protect and manage it is essential to maintain trust and comply with privacy regulations. Here are 5 best practices to follow:

1. Identify and Classify Sensitive Data
Start by identifying the types of sensitive data your application handles, such as personally identifiable information (PII), sensitive personal information (SPI), and confidential data. Understand the specific legal requirements and privacy regulations that apply, such as the GDPR or the California Consumer Privacy Act.

2. Minimize Data Exposure
Only share the necessary information with AI endpoints. For PII, such as names, addresses, or Social Security numbers, consider redacting this information before making API calls, especially if the data could be linked to sensitive applications, like healthcare or financial services.

3. Avoid Sharing Highly Sensitive Information
Never pass sensitive personal information, such as credit card numbers, passwords, or bank account details, through AI endpoints. Instead, use secure, dedicated channels for handling and processing such data to avoid unintended exposure or misuse.

4. Implement Data Anonymization
When dealing with confidential information, like health conditions or legal matters, ensure that the data cannot be traced back to an individual. Anonymize the data before using it with AI services to maintain user privacy and comply with legal standards.

5. Regularly Review and Update Privacy Practices
Data privacy is a dynamic field with evolving laws and best practices. To ensure continued compliance and protection of user data, regularly review your data handling processes, stay updated on relevant regulations, and adjust your practices as needed.

Remember, safeguarding sensitive information is not just about compliance; it's about earning and keeping the trust of your users.
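To make practice 2 concrete, here is a minimal illustrative sketch of redacting obvious PII patterns before a prompt leaves your boundary. The regexes and the redact_pii helper are hypothetical examples, not a complete de-identification solution; production systems should use a vetted library.

```python
import re

# Hypothetical helper: redact obvious PII patterns before a prompt is sent to
# an external AI endpoint. Real deployments would use a vetted
# de-identification library rather than hand-rolled regexes.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with a labeled placeholder, e.g. [REDACTED:SSN]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

prompt = "Patient John Doe, SSN 123-45-6789, emailed jdoe@example.com about billing."
safe_prompt = redact_pii(prompt)

# Only the redacted text is sent to the AI endpoint; the raw prompt never
# leaves your boundary.
print(safe_prompt)
```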
Protecting Personal Health Information (PHI)
Explore top LinkedIn content from expert professionals.
-
Let’s say you’re a newly hired Third-Party Risk Analyst at a mid-sized healthcare company. During your onboarding, you realize that while they have dozens of vendors handling sensitive patient data (think billing companies, cloud services, and telehealth providers), they have no formal third-party risk assessments documented.

First, you would start by building a basic Third-Party Inventory. You’d gather a list of all vendors, what services they provide, and what kind of data they have access to. You would focus on vendors that touch patient records (Protected Health Information, or PHI) because HIPAA requires stricter handling for that kind of data.

Next, you would create a simple vendor risk rating system. For example: any vendor handling PHI = High Risk, vendors with financial data = Medium Risk, vendors with only public data = Low Risk. You’d organize vendors into those categories so leadership can prioritize attention.

Then, you would prepare a basic Due Diligence Questionnaire to send out. It would ask things like:
• Do you encrypt PHI in transit and at rest?
• Do you have a current SOC 2 report?
• Have you had any breaches in the last 12 months?

After collecting responses, you would review them and flag any vendors who seem high-risk (like no encryption, no audit reports, or recent breaches). You’d recommend follow-ups, like contract updates, requiring security improvements, or even switching providers if needed.

Finally, you would propose setting up a recurring third-party review schedule, maybe every 6 or 12 months, so that vendor risk stays managed continuously, not just one time.
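As an illustrative sketch of that tiered rating idea, here is a minimal example in Python. The Vendor fields, example vendors, and tier rules are hypothetical and would be adapted to your actual inventory and policy.

```python
from dataclasses import dataclass

# An illustrative vendor inventory with a simple risk-tier rule, mirroring the
# PHI = High / financial = Medium / public-only = Low scheme described above.
@dataclass
class Vendor:
    name: str
    service: str
    handles_phi: bool
    handles_financial_data: bool

def risk_tier(vendor: Vendor) -> str:
    if vendor.handles_phi:
        return "High"    # PHI triggers the strictest HIPAA handling
    if vendor.handles_financial_data:
        return "Medium"
    return "Low"         # public or non-sensitive data only

inventory = [
    Vendor("Acme Billing", "claims processing", handles_phi=True, handles_financial_data=True),
    Vendor("CloudHost Co", "infrastructure hosting", handles_phi=True, handles_financial_data=False),
    Vendor("Newsletter Inc", "marketing email", handles_phi=False, handles_financial_data=False),
]

for v in inventory:
    print(f"{v.name:15} {v.service:22} -> {risk_tier(v)} risk")
```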
-
Agentic AI is coming for clinical workflows. But is it HIPAA-safe? 🤔

This new paper presents one of the first technical frameworks that attempts to align agentic AI, i.e. LLMs with autonomous decision-making capabilities, with the guardrails of HIPAA.

The risks: agentic systems can access unstructured EHRs, generate documentation, or even offer diagnostic recommendations. But without granular access controls and real-time PHI monitoring, they risk violating HIPAA’s minimum necessary rule at scale.

📌 The proposed system includes:
• Attribute-Based Access Control (ABAC) to enable dynamic, context-aware permissions
• A hybrid PHI sanitization pipeline (regex + BERT)
• Immutable audit logs for forensic compliance

Early results show an F1 score above 98% in redacting PHI from free-text notes and real-time enforcement of role-specific risk thresholds.

This matters because, as LLMs shift from assistive tools to autonomous agents, we need infrastructures that enforce compliance by design, not as an afterthought.

Is your AI system ready for HIPAA-grade responsibility?

#ai #privacy #medicine #health #healthcare
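To make the ABAC idea concrete, here is a toy sketch of a context-aware permission check. The attribute names and policy rules below are invented for illustration; they are not the paper's implementation.

```python
from dataclasses import dataclass

# Toy ABAC policy: access to PHI depends on attributes of the request
# (role, purpose, care-team assignment), not just a static role list.
@dataclass
class Request:
    role: str               # e.g. "nurse", "physician", "billing_agent", "ai_agent"
    purpose: str            # e.g. "treatment", "payment", "analytics"
    patient_assigned: bool  # is the requester scoped to this patient?

def phi_access_allowed(req: Request) -> bool:
    # "Minimum necessary": AI agents get PHI only for treatment-related tasks
    # on patients they are explicitly scoped to.
    if req.role == "ai_agent":
        return req.purpose == "treatment" and req.patient_assigned
    if req.role in {"nurse", "physician"}:
        return req.patient_assigned
    if req.role == "billing_agent":
        return req.purpose == "payment"
    return False

print(phi_access_allowed(Request("ai_agent", "analytics", True)))   # False
print(phi_access_allowed(Request("ai_agent", "treatment", True)))   # True
```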
-
If you share your customer's mental health condition across the internet and in the mail, without proper disclosure and choice, this is a "betrayal" per Federal Trade Commission Chair Lina Khan, and you may be banned from using health information for most advertising purposes going forward. What are the practice points from the new multi-million-dollar FTC enforcement actions?

Eye-openers:
🔹️ C-suite executives who direct advertising strategy can be personally implicated in an FTC action regarding unfair/deceptive data sharing
🔹️ Sharing information about people who "liked" the page of a mental health service provider is sharing sensitive information

At issue: data collected that includes home and email addresses, birthdates, medical and prescription histories, payment account or driver's license numbers, as well as information about treatment plans, pharmacy and health insurance plans, and other personal data, such as religious or political beliefs, or sexual orientation.

Privacy side:
🔹️ Attention C-suite: the FTC can come after executives for privacy violations if they control, direct, or are involved in creating or implementing the policies, or provide legal guidance.
🔹️ Don't say your services are "safe, secure and discreet" or that you will keep user data confidential when you are actually sharing it with third parties; that may be deemed deceptive. Even statements like "patients come first" may be problematic.
🔹️ Statements like that on the website may be misleading even if the privacy notice, a few pages in, describes sharing with third parties.
🔹️ It's generally a bad idea to bury sharing of sensitive information in the body of the privacy notice.
🔹️ The FTC can come after companies that are subject to HIPAA and deal in PHI.
🔹️ Your regular privacy notice (and actual sharing practices) can't contradict your HIPAA Notice of Privacy Practices.
🔹️ The FTC specifically takes issue with things like "email lookalike audiences," "conversion lookalike audiences" (based on trackers in your website), and "page like lookalike audiences" (based on "likes" of your pages).

Security side:
🔹️ Cut off your former employees' access to data
🔹️ Don't send postcards revealing personal information
🔹️ Beware your Single Sign-On (SSO): make sure it doesn't expose confidential medical files and patient information to other patients when those users sign onto the portal nearly simultaneously.

Consequences can include:
🔹️ A permanent ban from using or disclosing consumers' personal and health information to third parties for most marketing or advertising purposes
🔹️ For non-banned purposes, sharing of sensitive information permitted only with consent
🔹️ A comprehensive privacy plan
🔹️ A data retention schedule

#dataprivacy #dataprotection #privacyFOMO
-
Clinicians waste 6 hours a day digging through data they already entered. Here's how to give them back time, without risking a single byte of PHI.

A HIPAA-safe RAG Copilot: your own GPT securely trained on internal PDFs, guidelines, policies, and protocols. All inside your firewall.

Here's the 7-step playbook for setting it up right:

1. Sign the BAA → Azure: Portal > Compliance Manager
This formally designates Microsoft as a HIPAA Business Associate, responsible for safeguarding PHI under the agreement.

2. Spin up a private VNet → Disable public IP access
This isolates the Copilot so there are no leaks and no lateral-movement risk.

3. Use Azure AI Search + a vector store → Run vectorize() locally
All your embeddings stay inside the network.

4. Deploy GPT-4o (or Claude) in the same region → Example: East US healthcare zone
Keeps data from crossing regional or compliance boundaries.

5. Run a de-identification pipeline first → Use Presidio or Comprehend Medical
Strips names, dates, and identifiers before data hits the vector DB.

6. Ground your prompt template → Use structured prompt logic like: "Answer based only on the provided documents. If unsure, respond 'I don't know.' Always cite the document source."
This lowers hallucination risk and boosts clinical trust.

7. Audit everything → Rotate keys, log PIT20 tags, monitor access
HIPAA isn't one-and-done: ongoing oversight matters.

🎯 This setup gives clinicians secure, instant answers without compromising compliance or waiting on IT bottlenecks.

Tag your IT lead. Save this post. Share it in Slack. If clinicians are asking for GPT, this is the safest "yes" you can give them.

#DSO #NewPatients #AppointmentScheduler
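As a minimal sketch of step 5, a de-identification pass with Microsoft Presidio could look roughly like this. It assumes the presidio-analyzer and presidio-anonymizer packages (plus a spaCy English model) are installed; the entity coverage shown is illustrative, not a validated Safe Harbor pipeline.

```python
# Sketch: strip likely PHI from free-text notes before they are embedded and
# stored in the vector DB. Configuration here is illustrative only.
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

note = "Jane Smith, DOB 04/12/1961, seen on 2024-03-02 for hypertension follow-up."

# Detect likely PHI entities (names, dates, etc.) in the free-text note.
findings = analyzer.analyze(text=note, language="en")

# Replace detected spans with entity placeholders before vectorization.
deidentified = anonymizer.anonymize(text=note, analyzer_results=findings)
print(deidentified.text)  # e.g. "<PERSON>, DOB <DATE_TIME>, seen on <DATE_TIME> ..."
```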
-
If you're trying to build an application that is #HIPAA compliant, you have to make sure your #development team considers the non-functional requirements mandated by the HIPAA Security Rule. There are three key things your development team specifically needs to address:

Encryption: The solution must encrypt all PHI at rest (including where it may appear in databases, backups, and logs) and in transit (HTTPS/TLS), using strong algorithms like AES-256.

Access Controls: The solution must identify and limit the individuals accessing PHI, so implement role-based access with least privilege, enforce strong password policies and multi-factor authentication, and automate session timeouts for inactive users.

Audit Trails: The solution has to track all user activity involving PHI in tamper-proof audit logs, which need to capture important details like the user's identity, IP address, timestamp, actions taken, and the records accessed or modified.

There is much more involved in achieving HIPAA compliance! However, when we're designing an application for healthcare, these are the baseline requirements our development team starts with.
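As one illustrative way to approach the tamper-proof audit trail requirement, here is a sketch of hash-chained, append-only log entries, so after-the-fact edits are detectable. The field names and chaining scheme are example choices, not a mandated HIPAA format.

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch: each audit entry embeds the hash of the previous entry, so any
# modification or deletion breaks the chain and can be detected.
def append_audit_event(log: list, user_id: str, ip: str, action: str, record_id: str) -> dict:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "ip": ip,
        "action": action,        # e.g. "VIEW", "UPDATE", "EXPORT"
        "record_id": record_id,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

audit_log: list = []
append_audit_event(audit_log, "clin_042", "10.0.4.17", "VIEW", "patient/8821")
append_audit_event(audit_log, "clin_042", "10.0.4.17", "UPDATE", "patient/8821")
print(json.dumps(audit_log, indent=2))
```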
-
QUICK RECAP on Monday's new HHS guidance: as an update of the 2022 bulletin, this doesn't change a ton; it just clarifies that you are required to follow all HIPAA compliance regulations when sharing Protected Health Information (PHI) with tracking technology vendors.

The old bulletin already said this, but the update goes into a bit more detail about what counts as PHI, which can include IP address and geolocation on regulated sites even if the webpage doesn't require a login and there's no specific treatment or billing info. The update also nukes a lot of the edge cases where people might have argued, "But I'm sure it doesn't count if the tracker deletes the PHI after collecting it, or if there's a pop-up on the website asking the site visitor for tracking permission." None of that gets around the HIPAA compliance rules for sharing PHI.

So what are those rules? Basically, all tracking technology vendors must have signed a Business Associate Agreement (BAA) to ensure compliance with HIPAA privacy regulations. And if they won't, then you can establish a BAA with a Customer Data Platform (CDP), which will in turn set one up with the tracking vendor. But without the BAA, you pretty much can't disclose PHI without express HIPAA-compliant authorization from the individuals whose info is being tracked, and no, agreeing to the website banner's tracking cookies doesn't count.

So if you've been putting off updating your systems, this is a reminder to get your ass in gear and move to some modern HIPAA-compliant tech like BAA-signing web analytics and CDPs. Go find yourself a BAAdass agency too while you're at it. If you do, things will be fine. If you don't, things will be fined.
-
OCR updated its guidance on the use of tracking technologies yesterday, and my thoughts are apparently too long for a post, so here is a rare LinkedIn article from me. If this overview is too long, here's what I found most interesting:

1) Whether the information collected by a tracking technology is PHI depends in part on the intent of the individual (based on one example: tracking technologies on an unauthenticated page of a hospital website that outlines healthcare services may collect PHI if the website visitor is looking for a healthcare provider, but won't collect PHI if the visitor is a student doing research). The problem, of course, is that neither the hospital nor the tracking technology vendor will know the difference. They may make inferences, but they don't know. Companies will need to evaluate their risk based on the data collected, not the potential intent of the visitor (unless their website is structured in a way that makes it clear).

2) Tracking technology vendors that collect PHI (based on the nature of the data they collect) must sign a BAA, or the regulated entity must get HIPAA authorization (which can't be obtained through a website banner). It is insufficient for a tracking technology vendor itself to de-identify PHI (in lieu of authorization or a BAA), BUT OCR states that an intermediary can be used to de-identify data before it is shared with the tracking technology provider (it's unclear how that will work from a tech perspective).

3) OCR clarifies that signing an agreement with BAA-like restrictions will not make a company a business associate (like the controller/processor distinction: you are what you are, and the contract doesn't change that).

Bottom line: this is an enforcement priority for OCR. The best time to look at this was a few years ago, but the next best time is now. Both sides (vendors and regulated entities) need to understand what information is being collected by tracking technologies, whether it is covered by HIPAA, and then act based on that analysis. Companies that fall outside of HIPAA aren't off the hook; the FTC is watching.