Smart mattresses know when you are having sex. Apple Health can predict when you will menstruate. Alexa knows when you are congested. And most of us are okay with that because we live in a brave new world: in exchange for valuable services, we have been sharing data online for years. When Amazon and Zappos first launched, it seemed strange to enter credit card information online to buy a book or a pair of shoes. Now it’s commonplace, and people are sharing even more personal data as retailers and brands adopt new technologies to create hyper-personalized experiences and products.
Hyper-Personalization Can’t Exist without Privacy
While consumer expectations for highly personalized experiences have grown dramatically in recent years, so has privacy awareness, and tech companies and regulators are limiting data access, pushing marketers toward explicitly user-permissioned data instead of inferred data. Inferred and passively collected data, referred to as first-party data (FPD) when collected directly or third-party data when bought from another source, includes user behaviors such as digital interactions and purchase history. It is typically collected by cookies and often used in opaque ways. Accordingly, in March, Google announced that it will no longer support third-party cookies and will stop tracking an individual’s browsing history altogether through its Chrome browser. After Apple released its iOS 14.5 update in April, which lets users stop apps from tracking their activity for ad-targeting purposes, only 4% of iOS users in the US opted to let apps track them. Meanwhile, Colorado has joined California and Virginia as the third US state to pass comprehensive data privacy legislation with its Colorado Privacy Act, signed into law in July.
Explicitly user-permissioned data has become so prominent that it has earned its own name: “zero-party data” (ZPD). According to Forrester, ZPD includes purchase intentions, personal context, communication preferences, quiz answers, and how the individual wants the brand to recognize them. What makes ZPD special is that, compared with FPD, the data evolves from inferred to proven, and it is fully consented to and privacy compliant. Ultimately, ZPD gives consumers control over what data they share and who they share it with, enabling greater transparency and more effective personalization. ZPD builds consumer loyalty by allowing businesses to differentiate themselves from competitors while serving consumers more successfully.
Data: This Time, It’s Personal
“Over 80% of consumers expect personalization and prefer to conduct business with retailers that provide it,” and personalization can even be called a “hygiene factor: customers take it for granted, but if a retailer gets it wrong, customers may depart for a competitor,” reports McKinsey. Hence, the latest evolution of personalization extends to the entire customer experience, and the ZPD involved includes far more personal data. Retailers, brands, and wellness companies now include the consumer in the dialogue while leveraging data to create one-to-one personalization.
ZPD gives brands and retailers clear insight into consumer needs. In categories like health, skincare, and nutrition, ZPD must go beyond a consumer’s quiz answers to include diagnostic results for a holistic analysis of a consumer’s biology, lifestyle, and environment. Because consumers understand they will get real value in return, they are sharing very personal data for personalized products and experiences: geolocation, Apple Health app info, biowearable readings, facial imaging, DNA tests, hormone tests, and microbiome tests. Companies are using very personal ZPD in areas including:
- Biowearables
- Blood, DNA, and microbiome tests
- Facial imaging
The Value of Individual Data and Aggregate Data
Many experts argue that our individual data is simply not that valuable, and that even aggregate data gains value only after tremendous investment. In a recent article in The Information, Tim O’Reilly, founder and CEO of O’Reilly Media, states, “Data is not the new oil. It is the new sand.” Although silicon is derived from ubiquitous sand, it becomes valuable only through the industrial-scale processes that turn it into computer chips: years of research and development, immense capital investment in manufacturing, and scientific breakthroughs in how chips are designed and used. The same holds true for data: there is the cost of development, data collection, data hygiene, and data analysis. A large quantity of data doesn’t guarantee insight.
According to O’Reilly, the crux of the ownership issue with user data isn’t about the data’s value—it’s about the control of the user data. Does the company that collects the data have the right to control the data and how it is used? Is it required to get consent again if it expands the scope of its use (e.g., starting with personalized sleep and expanding to sex)? Is it obliged to use the data it acquires from users only for their benefit or can it be used against them? Can the data be resold without any monetary benefit to the user from whom it was collected?
The Threat of Privacy Loss Isn’t the Real Issue
Because the benefits of shared data are tremendous, people are willing to trade some privacy in exchange for them. People with life-threatening illnesses share details of their symptoms and treatments on sites such as Smart Patients and PatientsLikeMe in the hope that it will lead to a cure. However, they refrain from sharing data when they fear it may be used to discriminate against them, for example by being denied insurance coverage or risking the loss of a job.
O’Reilly argues that “discrimination, not loss of privacy, is the actual harm that should be regulated. Treating loss of privacy as the harm has led the U.S. healthcare system to treat patient data as if it were toxic waste, impeding information sharing and slowing research.” Instead of focusing on privacy as a scare tactic, we need to address the actual harms of sharing data, such as job discrimination due to unfettered access to personal information.
Regulators are addressing these privacy and discrimination issues with laws such as the Genetic Information Nondiscrimination Act (GINA), which protects individuals against discrimination based on their genetic information in health coverage and in employment, and the General Data Protection Regulation (GDPR), a framework for data protection and privacy in the European Union (EU) and the European Economic Area (EEA). Its scope also covers transfers of personal data outside the EU and EEA. It enhances individuals’ control over their personal data while simplifying the regulatory environment for international business.
While regulations are generally supportive, companies need to take the lead in implementing measures and systems to ensure the confidentiality, integrity, and availability of data, including independent security certification and audits, encryption, and access limited to essential personnel. Many companies use the utility-based cloud services of AWS or Microsoft to process, store, and transmit protected data, including personal health information (PHI). AWS enables covered entities and their business associates subject to the U.S. Health Insurance Portability and Accountability Act (HIPAA) to use the secure AWS environment to process, maintain, and store protected health information. HIPAA, along with the Health Information Technology for Economic and Clinical Health Act (HITECH), which expanded the HIPAA rules, established a set of federal standards intended to protect the security and privacy of PHI, individual rights, and administrative responsibilities. Since no HIPAA certification for a cloud service provider exists, AWS aligns its HIPAA risk management program with FedRAMP and NIST 800-53, higher security standards that map to the HIPAA Security Rule.
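One of the safeguards mentioned above, limiting what identifiable data is ever stored, can be sketched in a few lines. The following is a minimal, illustrative example of pseudonymizing a direct identifier with a keyed hash before a record reaches storage; the field names and record layout are hypothetical, not any particular company’s pipeline:

```python
import hashlib
import hmac
import secrets

# Illustrative only: in production the key would live in a key-management
# service, not in process memory.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash cannot be reversed by brute-forcing
    common values (emails, phone numbers) without the key.
    """
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "resting_hr": 58}
stored = {**record, "email": pseudonymize(record["email"])}

# The same input always maps to the same token, so records can still be
# joined for analysis without exposing the raw identifier.
assert stored["email"] == pseudonymize("jane@example.com")
assert stored["email"] != record["email"]
```

This is only one layer; it complements, rather than replaces, encryption at rest and strict access control.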
How Companies Are Overcoming Privacy Concerns
A few companies taking the lead in implementing such measures include 23andMe, Adyn, and MIME.
23andMe
23andMe’s data repository is a gold mine for drug development: it holds the genetic and health data of approximately 12 million people, and around 80 percent of those people have agreed to let their anonymized data be used for research, including drug research. 23andMe has shared user data with GlaxoSmithKline for use in developing drugs. In 2018, the FTC investigated 23andMe’s privacy practices, but the inquiry was closed in 2019 after the FTC found the company followed best practices for data privacy.
A spokesperson from 23andMe told the Guardian that “all its DNA samples were processed in the US and it did not share customer data with any third parties without the separate, explicit consent of the customer. Customers could opt to have their DNA sample destroyed or stored at the 23andMe lab, and they could close their accounts at any time.” Although 23andMe completed a merger with Richard Branson’s VG Acquisition Corp. and is now publicly traded on NASDAQ, “No customer data is shared with Virgin or anyone else as part of the proposed transaction,” the spokesperson said. (The company’s privacy statement notes that in the event of a merger, customer data “would remain subject to the promises made in any pre-existing privacy statement.”)
23andMe only shares user data outside the company through opt-in agreements, and 80% of users opt into research. Moreover, the data is only shared when anonymized and in aggregate, unless customers separately agree to have their anonymized data shared individually.
Adyn
Adyn is a new health tech company that uses genetic screening to match women with personalized contraception that minimizes the risk of side effects. According to Elizabeth Ruzzo, a geneticist and the founder and CEO of Adyn, “Women have been forced to use trial-and-error for decades, but thanks to Adyn’s precision medicine approach, we no longer have to suffer side effects. This new standard of care empowers both doctors and patients to make the best choice.”
“We know that the data we collect is both personal and powerful, and we take responsibility to protect it. If we found mission-aligned scientists, we might consider partnering with them only if we: A) have your explicit consent, and B) we believe sharing our anonymous aggregate (population-level) dataset will advance science/medicine,” said Ruzzo. Adyn is taking the same approach to privacy as 23andMe. Adyn will not sell individual-level data; it requests consent for use of aggregate data, and it will use aggregate data internally to improve its ML.
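The “anonymous aggregate (population-level)” sharing Ruzzo describes can be illustrated with a small sketch: individual responses are collapsed into group statistics, and groups too small to hide an individual are suppressed. The field names and the k=5 threshold below are illustrative assumptions, not Adyn’s actual pipeline:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical individual-level rows; only group statistics are ever shared.
individual_rows = [
    {"age_band": "25-34", "side_effect_score": 2},
    {"age_band": "25-34", "side_effect_score": 4},
    {"age_band": "25-34", "side_effect_score": 3},
    {"age_band": "25-34", "side_effect_score": 5},
    {"age_band": "25-34", "side_effect_score": 1},
    {"age_band": "35-44", "side_effect_score": 2},  # a single row: suppressed
]

def aggregate(rows, k=5):
    """Collapse individual rows into population-level statistics.

    Groups with fewer than k members are withheld (a k-anonymity-style
    rule), so no released number traces back to one person.
    """
    groups = defaultdict(list)
    for row in rows:
        groups[row["age_band"]].append(row["side_effect_score"])
    return {
        band: {"n": len(scores), "mean_score": mean(scores)}
        for band, scores in groups.items()
        if len(scores) >= k
    }

shared = aggregate(individual_rows)
# The 35-44 band is withheld because it contains a single individual.
```

The design choice is that the released dataset contains only counts and means, never the rows themselves, so a research partner learns population trends without receiving any individual’s record.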
MIME
A company working to solve the anonymization of consumer identities in facial images is MIME. It uses consumer photos to help consumers find the perfect foundation, concealer, or other beauty product match based on skin tone, with high accuracy. “With increasing demands of GDPR, CCPA, and other privacy guidelines being set—we strive to go beyond the bare minimum when thinking about consumer privacy. Since our core technology relies on selfie analysis, we had to ensure we could fully anonymize the customer’s image to be GDPR compliant—while also ensuring our AI platform could continue to learn and get smarter every day,” said MIME founder and CEO Chris Merkle.
Historically, a customer’s photo could be linked to their email, IP address, or other unique identifier, but even after removing that link, MIME wanted to ensure the customer photo could be truly anonymous. MIME’s patent-pending technique, invented by Merkle and lead data scientist Rafael Toledo, involves extracting the customer’s face from the submitted photo, extracting facial color information into clusters to create new images, randomizing the rotation of each new image, adding intelligent noise overlays to each new photo, then encrypting each photo. Only MIME’s neural network understands how these encrypted files work together to produce a result like a skin tone recommendation.
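The steps above, cluster the facial color information, randomize rotation, add noise, then encrypt, can be sketched as a toy pipeline. This is a rough, stdlib-only illustration: the “image” is a list of RGB tuples, and the crude brightness clustering and XOR keystream stand in for MIME’s actual patent-pending technique, which is not public:

```python
import random
import secrets

def cluster_colors(pixels, n_clusters=3):
    """Partition pixels into crude brightness-band clusters (illustrative)."""
    clusters = [[] for _ in range(n_clusters)]
    for r, g, b in pixels:
        band = min((r + g + b) // (768 // n_clusters), n_clusters - 1)
        clusters[band].append((r, g, b))
    return clusters

def add_noise(pixels, amplitude=8):
    """Perturb each channel slightly, clamped to the valid 0-255 range."""
    return [
        tuple(max(0, min(255, c + random.randint(-amplitude, amplitude))) for c in px)
        for px in pixels
    ]

def encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream; a real system would use an authenticated cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A toy "extracted face" of four pixels.
face_pixels = [(200, 170, 150), (90, 60, 50), (230, 200, 180), (30, 20, 15)]

fragments = []
for cluster in cluster_colors(face_pixels):
    if not cluster:
        continue
    rotation = random.choice([0, 90, 180, 270])  # randomized per fragment
    noisy = add_noise(cluster)
    raw = bytes(c for px in noisy for c in px)
    key = secrets.token_bytes(16)
    fragments.append({"rotation": rotation, "blob": encrypt(raw, key)})

# No single fragment resembles the original face; only a model holding all
# fragments (and keys) could relate them back to a skin-tone estimate.
```

The point of the sketch is the shape of the idea, not the cryptography: once the photo exists only as scrambled, separately encrypted fragments, a breach of any one store reveals nothing recognizably facial.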
Merkle says, “This technology is not only unique for the beauty industry but where any data-science or AI-driven company has to anonymize their customer photos, while being able to retain that data for improving their products. Our patent-pending technique is yet another safeguard for customer privacy should a data breach occur at any company where consumer photos are stored.”
Conclusion
Personalization isn’t a luxury anymore. Food, supplements, sleep, medicine, skincare, and fitness are already personalized. The benefits and massive opportunities of personalization are undeniable: consumers can approach their health and wellness preventatively rather than reactively, and typically underserved consumers can participate in research and product development. Those benefits will impel businesses to preserve privacy. Hyper-personalized offerings will keep launching as businesses try to differentiate themselves and cater to consumers more effectively. Yes, data breaches will continue, but so will security services, and when push comes to shove we will get over it, because we have happily grown accustomed to personalization. When your smart mattress company gets hacked, your data will be anonymized, the company will be on top of it, and, hopefully, you’ll be having the best sleep of your life.