Fenwick Privacy Bulletin - Fall 2016

In This Issue

Privacy Shield – An Early Reflection

EU law generally prohibits the transfer of personal data from the European Economic Area to the U.S., unless the transfer is made in accordance with an authorized data transfer mechanism or otherwise falls within an exception. The Safe Harbor framework was one of these authorized data transfer mechanisms, but it was declared invalid by the Court of Justice of the European Union last October. – Jonathan S. Millard and Hanley Chew

Standing in Privacy Cases: Key Decisions to Know

In the past two months, the U.S. Courts of Appeals for the Sixth and Eighth Circuits have issued significant decisions on the issue of standing in privacy cases. Taken together, these decisions are seemingly inconsistent, offering conflicting standards for what constitutes a cognizable injury sufficient to confer standing at the pleading stage in privacy cases. – Hanley Chew and Tyler G. Newby

The Cybersecurity Information Sharing Act of 2015: An Overview

On December 18, 2015, President Barack Obama signed into law the Cybersecurity Information Sharing Act of 2015 (CISA), which establishes a voluntary system for sharing cyber threat indicators and defensive measures for a cybersecurity purpose between federal entities and nonfederal entities. – Hanley Chew and Tyler G. Newby

Employer Wellness Programs – New Rules on Collection of Confidential Health Information

Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued two rules providing additional clarity for employers as to whether they can collect confidential health information from their employees and spouses in connection with employer-sponsored wellness programs. – Anna Suh

Are IP Addresses PII? Why Businesses Should Be Cautious About IP Addresses

We often hear the following phrase from clients: “We are not collecting PII, we only collect IP addresses.” Companies may be surprised to hear that the law does not always support that view, and businesses must be cautious in their assessment in this area, as multiple laws govern the use of personally identifiable information (PII), which are not always consistent with regard to the classification of Internet Protocol (IP) addresses. – Clay Venetis, Hanley Chew and Jonathan S. Millard

Privacy Shield – An Early Reflection

By Jonathan S. Millard and Hanley Chew

A Brief Recap

EU law generally prohibits the transfer of personal data from the European Economic Area to the U.S., unless the transfer is made in accordance with an authorized data transfer mechanism or otherwise falls within an exception. The Safe Harbor framework was one of these authorized data transfer mechanisms, but it was declared invalid by the Court of Justice of the European Union last October. Following months of negotiation between the U.S. and the EU, as of August 1, 2016, the EU-U.S. Privacy Shield was launched, and companies can now self-certify with the Department of Commerce and join the program.

We therefore take a look at the current “state of play” with regard to the Privacy Shield and some of the decisions facing those considering whether to self-certify.

Why Rush to Certify?

Given that companies were managing to transfer personal data from the EU to the U.S. prior to August 1, 2016, why, then, should companies rush into Privacy Shield certification?

Nine-Month Grace Period: To incentivize use of the new Privacy Shield, if a company filed its self-certification by September 30, 2016, it was granted a nine-month grace period (from the date of certification) to conform its contracts with third parties to the new onward transfer requirements under the Privacy Shield. This onward transfer requirement essentially means that certifying companies have to ensure that any subprocessors of the personal data (entities to which the personal data is transferred) have Privacy Shield-type safeguards in place. This grace period was seen as a big incentive to certify early.

Risk of Invalidation of Model Clauses: On May 25, 2016, the Irish DPA referred a matter to the Court of Justice of the European Union, effectively challenging the legal validity of the Model Clauses (the other key authorized data transfer mechanism) on similar grounds as those on which the Safe Harbor was challenged. This left companies wondering whether they could continue to rely on Model Clauses, potentially making the Privacy Shield a more attractive alternative.

Some Added Certainty from the Regulators: Following the uncertainty created by the Irish DPA referral (above), on July 26, 2016, Isabelle Falque-Pierrotin, the Chairwoman of the Article 29 Working Party of data protection regulators, announced that EU data protection regulators would not challenge the adequacy of the Privacy Shield until late 2017, providing some level of certainty to the Privacy Shield route – at least in the short term.

A Step Closer to GDPR Compliance: A thorough Privacy Shield gap analysis and compliance program has the benefit of moving a company a step toward the compliance requirements of the General Data Protection Regulation (GDPR), which will become effective in mid-2018. The GDPR extends the territorial reach of EU privacy law to U.S. companies providing services to the European market or profiling EU citizens, so Privacy Shield compliance may be seen by some as Phase 1 of a multiphased approach to GDPR compliance.

Reasons for Possible Reluctance

The Privacy Shield has certainly received mixed reviews from the market and regulators alike, so not everybody has been so keen to certify. Here’s why:

Early Adopters = Increased Scrutiny: The benefits of the nine-month grace period have to be weighed against the risks of the increased scrutiny that early adopters will face. There will be pressure on regulators and governments to ensure that the Privacy Shield is seen as a robust mechanism, protecting the fundamental freedoms of EU citizens. This will bring with it heightened sensitivities around compliance and transparency, meaning that those certifying early will likely be closely scrutinized. Now that the window for taking advantage of the grace period has closed, it will be interesting to see how demand for certification is affected.

“Better the Devil You Know”: Since the Safe Harbor invalidation, many companies have scrambled to implement a Model Clause program, which required significant analysis, assessment and implementation. Implementing a Privacy Shield compliance program will now require further changes. Given that many companies have become comfortable with the risk profile and obligations of Model Clauses, they may opt to retain the Model Clause approach and adopt a wait-and-see position on whether the regulators, the courts or their customers force their hand to certify under the Privacy Shield. A sensible approach would seem to be to keep one eye on each regime.

Cost of Compliance Program: A criticism of the Safe Harbor was that self-certification was often not taken seriously enough by those certifying and those supervising. Under the new framework, the consequences of noncompliance are real, and warrant appropriate attention and resources. Some will simply wish to postpone this until the next quarter, calendar year or revenue cycle, or maybe even until early examples of strong enforcement justify the legal spend.

So Who Has Certified?

At the time of writing, over 200 U.S. companies have already self-certified to the Privacy Shield, with another 300 applications under review. Sophisticated companies, such as Oracle, Microsoft, Google and Salesforce, are now among those that have been certified, along with smaller enterprises. In contrast, more than 4,000 companies had been certified under the Safe Harbor regime. The International Association of Privacy Professionals noted in its 2016 Annual Privacy Governance Report that “many companies remain wary of Privacy Shield and are still weighing other transfer compliance options. This is especially true of small companies for whom GDPR compliance presents a formidable challenge.”

It therefore appears that a state of flux will continue, at least in the short term.



Standing in Privacy Cases: Key Decisions to Know

By Hanley Chew and Tyler G. Newby

In the past two months, the U.S. Courts of Appeals for the Sixth and Eighth Circuits have issued significant decisions on the issue of standing in privacy cases. Taken together, these decisions are seemingly inconsistent, offering conflicting standards for what constitutes a cognizable injury sufficient to confer standing at the pleading stage in privacy cases.

Carlsen v. GameStop, Inc.

In Carlsen v. GameStop, Inc., No. 15-2453 (8th Cir. Aug. 16, 2016), the Eighth Circuit expanded standing to bring claims for violations of a privacy policy. Matthew Carlsen filed a class action suit for breach of contract, unjust enrichment and other state claims, alleging that GameStop violated its privacy policy, which provided that, with certain exceptions, GameStop did not share personal information with third parties. Specifically, Carlsen alleged that when users logged onto the Game Informer website, which was controlled by GameStop, through their Facebook accounts, the website transmitted their Facebook IDs and Game Informer browsing histories to Facebook if they did not log out of their Facebook accounts. Carlsen alleged that, had he known about the sharing of information, he either would not have paid as much for his subscription or would have refrained from accessing the online content on the Game Informer website for which he paid.

The district court dismissed Carlsen’s complaint for lack of standing, holding that Carlsen had failed to allege that he suffered a cognizable injury. The Eighth Circuit affirmed the dismissal, not for lack of standing, but for failing to state a claim. In addressing the issue of standing, the court found that Carlsen’s allegation that he suffered damages “in the form of devaluation of his Game Informer subscription in an amount equal to the difference between the value of the subscription that he paid for and the value of the subscription that he received, i.e., a subscription with compromised privacy protection,” constituted an “actual” injury. Carlsen, at *7. In addition, the court found that “Carlsen’s allegation that he did not receive data protection set forth in GameStop’s policies” was sufficient to establish standing for his unjust enrichment claim. Id.

Braitberg v. Charter Communications, Inc.

A few weeks later, the Eighth Circuit issued a seemingly contradictory decision in Braitberg v. Charter Communications, Inc., No. 14-1737 (8th Cir. Sept. 8, 2016), which limited standing to bring claims for the retention of personal information. In Braitberg, Alex Braitberg brought a class action lawsuit alleging that Charter Communications, Inc. (Charter) had violated the Cable Communications Policy Act (CCPA) by retaining the personally identifiable information of its customers after they had cancelled their subscriptions and after retention of that information was no longer necessary for Charter to provide services, collect payments, or satisfy any tax, accounting or legal obligations. In his complaint, Braitberg alleged that he and the other proposed class members had suffered two injuries. First, he alleged Charter’s retention of subscribers’ personal information directly invaded their federally protected privacy rights under the CCPA. Second, he alleged the class had not received the full value of the services that they had purchased from Charter. Specifically, Braitberg alleged that the class placed a monetary value on controlling and protecting their personal information and that when they subscribed to cable service, they paid for Charter to destroy their information when Charter no longer needed it. Charter’s failure to do so deprived them of the service for which they had allegedly paid.

The district court dismissed the complaint upon Charter’s motion challenging the plaintiff’s Article III standing. The Eighth Circuit affirmed the district court’s dismissal, finding that Braitberg’s complaint had fallen short of the Spokeo, Inc. v. Robins standard by failing to allege a concrete injury arising from Charter Communications’ alleged retention of the personal information. See Braitberg, No. 14-1737, at 8. Instead, Braitberg had only alleged “a bare procedural violation, divorced from any concrete harm.” See id. (citation omitted). The court noted that Braitberg had failed to allege that the personal information that Charter had retained, allegedly in violation of the CCPA, had been disclosed to or accessed by a third party or that Charter had even used the information itself during the disputed period. See id. The Eighth Circuit observed that while there was an established common law tradition recognizing injury based on the invasion of privacy rights, there was no such tradition for the retention of personal information that was lawfully obtained. See id. The court similarly rejected Braitberg’s economic injury argument, holding that “Braitberg has not adequately alleged that there was any effect on the value of the service that he [and the other class members] purchased from Charter.” Id., at 9.

Galaria, et al. v. Nationwide Mutual Insurance Co.

Only four days after Braitberg, the Sixth Circuit issued its decision in Galaria, et al. v. Nationwide Mutual Insurance Co., Nos. 15-3386/3387 (6th Cir. Sept. 12, 2016), which expanded standing in data breach cases. In Galaria, Mohammad Galaria and Alex Hancock filed class action complaints against Nationwide Mutual Insurance Co. (Nationwide) for negligence and other state claims after hackers broke into Nationwide’s network and stole the personal information of 11 million Nationwide customers. The complaints alleged that the Nationwide data breach created an “imminent, immediate and continuing increased risk” that the plaintiffs and the other class members would be the victims of identity theft and that they had suffered and would continue to suffer both “financial and temporal” costs, such as having to purchase credit reporting and monitoring services, instituting and/or removing credit freezes, and closing or modifying financial accounts.

The district court dismissed the complaints, concluding that the plaintiffs had not alleged an injury sufficient to confer Article III standing. The Sixth Circuit reversed the dismissal, concluding that the plaintiffs’ allegations that the theft of their personal information subjected them to a heightened risk of identity theft and caused them to incur mitigation costs, such as credit monitoring, were sufficient to establish standing at the pleading stage. Citing Clapper v. Amnesty Int’l USA, 133 S. Ct. 1138, 1147, 1150 n.5 (2013), the Sixth Circuit explained that “threatened injury must be certainly impending to constitute injury in fact,” and “standing [may be] based on a ‘substantial risk’ that the harm will occur, which may prompt plaintiffs to reasonably incur costs to mitigate or avoid the harm, even where it is not ‘literally certain the harms they identify will come about.’” Galaria, Nos. 15-3386/3387, at 6. Turning to the allegations of the complaints, the court found that “[w]here a data breach targets personal information, a reasonable inference can be drawn that the hacker will use the victims’ data for…fraudulent purposes[,]” and “although it might not be ‘literally certain’ that Plaintiffs’ data will be misused…, there is a sufficiently substantial risk of harm that incurring mitigation costs is reasonable.” Id. at 6-7.

The Impact

At first glance, these three cases appear incompatible. However, upon closer inspection, they adhere to the proposition that injuries traditionally recognized by the common law are sufficient to confer Article III standing at the pleading stage, while novel theories of injury are less likely to be recognized. Carlsen was premised on a breach of contract claim in which the plaintiff alleged he did not receive the benefit of the bargain for which he had paid. The common law has long recognized that contract damages are concrete injuries for the purposes of establishing standing. Galaria involved criminal activity that led the court to conclude that the plaintiffs faced an imminent threat of identity theft and fraud, which would constitute concrete harm to those plaintiffs. In contrast, the common law does not recognize an injury based upon the retention of personal information that has been legally obtained, as in Braitberg.



The Cybersecurity Information Sharing Act of 2015: An Overview

By Hanley Chew and Tyler G. Newby

On December 18, 2015, President Barack Obama signed into law the Cybersecurity Information Sharing Act of 2015 (CISA), which establishes a voluntary system for sharing cyber threat indicators1 and defensive measures2 for a cybersecurity purpose3 between federal entities4 and nonfederal entities.5 Examples of cyber threat indicators include web server logs showing that a particular IP address is testing a company’s vulnerabilities on its website; techniques that permit unauthorized access to a control system; malware found on the company’s network; domain names or IP addresses associated with botnet command and control servers; discovered vulnerabilities; and IP addresses sending malicious traffic in connection with a distributed denial of service attack. Examples of defensive measures include information about security devices that protect a company’s network, such as the configuration of a firewall mechanism, and techniques to protect against phishing campaigns.
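Indicators of the first and last kinds are typically distilled mechanically from existing telemetry. As a rough illustration only, the following Python sketch flags IP addresses that request many distinct URLs on a web server; the Apache-style log format and the 20-distinct-path threshold are illustrative assumptions, not anything CISA or the DHS guidance prescribes.

```python
import re
from collections import defaultdict

# Hypothetical sketch: derive one kind of cyber threat indicator (source IPs
# that appear to be probing a website for vulnerabilities) from Apache-style
# access logs. The threshold of 20 distinct paths is an assumption chosen
# for illustration.
LOG_LINE = re.compile(
    r'^(\d+\.\d+\.\d+\.\d+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)')

def suspicious_ips(log_lines, distinct_path_threshold=20):
    """Flag IPs requesting unusually many distinct paths (possible scanning)."""
    paths_by_ip = defaultdict(set)
    for line in log_lines:
        match = LOG_LINE.match(line)
        if match:
            ip, path = match.groups()
            paths_by_ip[ip].add(path)
    return {ip for ip, paths in paths_by_ip.items()
            if len(paths) >= distinct_path_threshold}

with open("access.log") as f:  # placeholder file name
    print(sorted(suspicious_ips(f)))
```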

CISA provides a safe harbor from civil, regulatory and antitrust liability for nonfederal entities that share this information in accordance with its provisions. However, CISA’s safe harbor protection does not shield nonfederal entities from potential liability for data breaches or other cybersecurity incidents. It only shields nonfederal entities from liability for their act of sharing or receiving such information.

CISA extends civil liability protection for sharing cyber threat indicators and defensive measures only to private entities,6 and only if the sharing was done in accordance with CISA (i.e., through DHS-certified channels and mechanisms). Where this sharing is done in a manner not consistent with CISA, private entities do not receive liability protection, but are still eligible for the following CISA protections and exemptions:

  1. federal antitrust exemptions;
  2. exemptions from federal and state disclosure laws, such as federal, state, tribal or local government freedom of information laws, open government laws, open records laws or sunshine laws;
  3. exemptions from certain state and federal regulatory uses, which prevent any federal, state, tribal or local government from using shared information to bring an enforcement action, although the information may still inform the development or implementation of regulations;
  4. no waiver of any applicable protection or privilege for shared material;
  5. treatment of shared material as commercial, financial and proprietary information when so designated by the sharing entity; and
  6. ex parte communications waiver, which prevents the sharing of cyber threat indicators and defensive measures with the federal government under CISA from being subject to the rules of any federal agency or department or any judicial doctrine regarding ex parte communications with a decision-making official.7

CISA limits the government’s use of the information shared under its provisions and the type of data to which the government has access. The government may use the information provided under CISA only for a cybersecurity purpose; to identify cybersecurity threats or security vulnerabilities; to respond to, prevent or mitigate a specific threat of death or serious bodily or economic harm; or to investigate or prosecute specific criminal offenses. It may not use the information to bring an enforcement action.

In addition, CISA requires nonfederal entities that share cyber threat indicators and defensive measures to remove all information that the sharing entity “knows at the time of sharing” to be the personal information of a specific individual or information that identifies a specific individual unrelated to the cybersecurity threat.8 The guidance provides a spear-phishing email as an illustration of what should be done. The personal information of the sender of the spear-phishing email, such as the name, email address and content of the email, should be shared, but the personal information of the intended victims should not.
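A minimal sketch of that scrubbing step, assuming indicators are held as simple Python dictionaries (the field names are illustrative assumptions; neither CISA nor the guidance prescribes a record format):

```python
# Hypothetical sketch: strip victim-identifying fields from a spear-phishing
# indicator before sharing, keeping the attacker-side details. Field names
# are assumptions chosen for illustration.
VICTIM_FIELDS = {"recipient_name", "recipient_email", "employee_id"}

def scrub_indicator(indicator: dict) -> dict:
    """Return a copy with known victim-identifying fields removed."""
    return {k: v for k, v in indicator.items() if k not in VICTIM_FIELDS}

raw = {
    "sender_email": "attacker@badguy.example",    # related to the threat: share
    "subject": "Urgent: reset your password",     # related to the threat: share
    "malicious_url": "http://badguy.example/x",   # related to the threat: share
    "recipient_email": "jane.doe@victim.example", # unrelated victim PII: remove
}
print(scrub_indicator(raw))
```

In practice the “knows at the time of sharing” standard calls for a review step rather than a bare field filter, but the principle is the same: attacker-side details are shared, victim-side details are removed.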

CISA centralizes cybersecurity information sharing between federal entities and nonfederal entities in the Department of Homeland Security (DHS), specifically the National Cybersecurity and Communications Integration Center (NCCIC). On February 16, 2016, the DHS published interim guidance and policies for implementing CISA. On June 15, 2016, the DHS and Department of Justice (DOJ) released the final guidance on implementing CISA.

The final guidance for CISA contemplates the utilization of the Automated Indicator Sharing (AIS) initiative as the primary means of sharing information. The AIS initiative is an automated system that receives, processes and disseminates cyber threat indicators and defensive measures in real time with all AIS participants. The system works by having AIS participants connect to a DHS-managed system in NCCIC. A server housed at each AIS participant’s location enables participants to exchange information with the NCCIC. Once the NCCIC has reviewed a submission to determine that there is no personally identifiable information (PII), the indicators and defensive measures will be shared with all AIS participants.

In addition to PII, DHS has identified the types of personal information that are protected by federal and state laws and unlikely to be related to a cybersecurity threat. AIS participants should be particularly diligent about removing the following types of information from their CISA submissions:

  1. protected health information (PHI), such as medical records, laboratory reports or hospital bills;
  2. human resource information within an employee’s personnel file, such as hiring decisions, performance reviews and disciplinary actions;
  3. consumer information/history related to an individual’s purchases, preferences, complaints or credit;
  4. education history, such as transcripts, or training;
  5. financial information, such as bank statements, loan information or credit reports;
  6. identifying information about property ownership, such as property purchase records or vehicle identification numbers; and
  7. identifying information about children under the age of 13 subject to the Children’s Online Privacy Protection Act.

After agreeing to the AIS’s Terms of Use, AIS participants may submit cyber threat indicators or defensive measures through one of three methods: (1) the DHS Trusted Automated eXchange of Indicator Information (TAXII) server in the AIS Profile format; (2) a web form on the US-CERT web portal; or (3) by email to DHS.
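For the first of those methods, a submission amounts to an HTTP POST of a STIX document to a TAXII inbox service. The following Python sketch is illustrative only: the inbox URL, client certificate paths and STIX payload are placeholders, and a real AIS submission requires DHS onboarding, an executed Terms of Use and a payload conforming to the AIS Profile.

```python
# Hypothetical sketch of pushing an indicator to a TAXII 1.1 inbox service.
# The URL, certificate paths and STIX payload are placeholders; actual AIS
# submissions require DHS onboarding and an AIS Profile-conformant document.
import requests

TAXII_INBOX = "https://taxii.example.gov/services/inbox"     # placeholder
STIX_PAYLOAD = "<stix:STIX_Package>...</stix:STIX_Package>"  # placeholder

inbox_message = f"""<taxii_11:Inbox_Message
    xmlns:taxii_11="http://taxii.mitre.org/messages/taxii_xml_binding-1.1"
    message_id="1">
  <taxii_11:Content_Block>
    <taxii_11:Content_Binding binding_id="urn:stix.mitre.org:xml:1.1.1"/>
    <taxii_11:Content>{STIX_PAYLOAD}</taxii_11:Content>
  </taxii_11:Content_Block>
</taxii_11:Inbox_Message>"""

response = requests.post(
    TAXII_INBOX,
    data=inbox_message.encode("utf-8"),
    headers={
        # Header values follow the TAXII 1.1 HTTP binding.
        "Content-Type": "application/xml",
        "X-TAXII-Content-Type": "urn:taxii.mitre.org:message:xml:1.1",
        "X-TAXII-Protocol": "urn:taxii.mitre.org:protocol:https:1.0",
        "X-TAXII-Services": "urn:taxii.mitre.org:services:1.1",
    },
    cert=("client.crt", "client.key"),  # placeholder DHS-issued client cert
)
response.raise_for_status()
```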

Participants who share indicators and defensive measures through the AIS will not be identified as the source of that information unless they affirmatively consent to the disclosure of their identities. DHS will not verify the accuracy of the indicators or defensive measures submitted by participants. The AIS Terms of Use require participants to use “reasonable efforts” to ensure that any indicator or defensive measure is accurate at the time that it is submitted.9 However, as the government obtains useful information concerning an indicator or defensive measure, it may assign a reputational score.

Nonfederal entities may continue to provide cyber threat indicators and defensive measures to federal entities through Information Sharing and Analysis Centers (ISACs) and Information Sharing and Analysis Organizations (ISAOs), which are private entities that share cybersecurity information with federal entities through DHS. Private entities that share this information with ISACs and ISAOs in accordance with CISA still receive all the protections and exemptions under CISA. Similarly, ISACs and ISAOs that share information with other private entities in accordance with CISA also receive the same level of protection under CISA.

The implementation of CISA is still a work in progress. It remains to be seen how this information sharing will look and how effective it will be. Companies considering participation should take appropriate steps to ensure that their sharing is done in a manner that is consistent with CISA’s provisions and maximizes CISA’s protections.


1 CISA defines “cyber threat indicator” as:

[I]nformation that is necessary to describe or identify:

(i) malicious reconnaissance, including anomalous patterns of communications that appear to be transmitted for the purpose of gathering technical information related to a cybersecurity threat or security vulnerability;

(ii) a method of defeating a security control or exploitation of a security vulnerability;

(iii) a security vulnerability, including anomalous activity that appears to indicate the existence of a security vulnerability;

(iv) a method of causing a user with legitimate access to an information system or information that is stored on, processed by or transiting an information system to unwittingly enable the defeat of a security control or exploitation of a security vulnerability;

(v) a malicious cyber command and control;

(vi) the actual or potential harm caused by an incident, including a description of the information exfiltrated as a result of a particular cybersecurity threat;

(vii) any other attribute of a cybersecurity threat, if disclosure of such attribute is not otherwise prohibited by law; or

(viii) any combination of (i)-(vii).

Section 102(6) of CISA.

2 CISA defines “defensive measures” as:

An action, device, procedure, signature, technique or other measure applied to an information system or information that is stored on, processed by or transiting an information system that detects, prevents or mitigates a known or suspected cybersecurity threat or security vulnerability; but does not include a measure that destroys, renders unusable, provides unauthorized access to or substantially harms an information system or information stored on, processed by or transiting such information system not owned by (i) the private entity operating the measure, or (ii) another entity that is authorized to provide consent and has provided consent to that private entity for operation of such measure.

Section 102(7) of CISA.

3 CISA defines a “cybersecurity purpose” as “the purpose of protecting an information system or information that is stored on, processed by or transiting an information system from a cybersecurity threat or security vulnerability.” Section 102(4) of CISA.

4 CISA defines a “federal entity” as “a department or agency of the United States or any component of such department or agency.” Section 102(9) of CISA.

5 CISA defines a “nonfederal entity” as “any private entity, nonfederal government agency or department, or state, tribal or local government (including a political subdivision, department or component thereof).” Section 102(14) of CISA.

6 CISA defines “private entities” as including “any person or private group, organization, proprietorship, partnership, trust, cooperative, corporation or other commercial or nonprofit entity, including an officer, employee or agent thereof,” and “State, tribal or local government performing utility services, such as electric, natural gas or water services.” Section 102(15)(A)-(B) of CISA.

7 All of CISA’s protections, including liability protection, are available to private entities. However, other state, tribal or local government entities that are not included in CISA’s definition of “private entities” are not eligible for liability or antitrust protection, which is reserved for “private entities.” These other entities are still eligible for CISA’s other protections and exemptions.

8 In addition, CISA enables private entities to monitor and adopt defensive measures for their own information systems for cybersecurity purposes. CISA explicitly shields private entities from any liability for monitoring activities conducted in a manner consistent with its requirements.

9 See U.S. Dep’t of Homeland Sec. Terms of Use, § 3.2.



Employer Wellness Programs – New Rules on Collection of Confidential Health Information

By Anna Suh

EEOC Rules

Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued two rules providing additional clarity for employers as to whether they can collect confidential health information from their employees and spouses in connection with employer-sponsored wellness programs:

  • The first rule permits employers to obtain health information regarding an employee’s spouse in connection with the offering of financial and other incentives as long as the information is not used to discriminate against an employee.
  • The second rule permits employers to offer limited financial and other incentives for wellness programs that are part of a group health plan and that ask questions about employees’ health or include medical examinations (such as tests to detect high blood pressure, high cholesterol or diabetes).

The Conditions

Under each of the rules, the employer wellness programs must be reasonably designed to promote health and prevent disease, and the employer and its wellness program provider must also make efforts to ensure that the medical information they collect is kept confidential. Employers should note that participation in the wellness program must be completely voluntary. In other words, employers cannot deny or limit employees’ health coverage for nonparticipation, may not retaliate against or interfere with any employee who chooses not to participate and may not coerce, threaten, intimidate or harass anyone into participating.

An employer should further note that, while it may use aggregate information collected by the wellness program provider to design a program based on identified health risks in the workplace, the wellness program provider may never disclose an employee’s personally identifiable health information to the employer, except as necessary to respond to a request from the employee for a reasonable accommodation needed to participate in the wellness program or as expressly permitted by law. Similarly, an employee’s personally identifiable medical information cannot be provided to the employee’s supervisors or managers and may never be used to make decisions regarding the employee’s employment.

EEOC Guidance

To provide additional guidance for employers, on June 16, 2016, the EEOC posted a sample notice to assist employers who have wellness programs in complying with their new obligations. Further discussion from the EEOC is also available online.

Conclusion

The EEOC’s recent rules demonstrate the Commission’s acknowledgement of the potential benefits of employer-sponsored wellness programs as well as the Commission’s willingness to work with employers who want to collect health data that may enable them to promote employee health and well-being and create better workplaces. Assuming that the employer and its wellness program provider comply with the obligations set forth by the EEOC and its regulations, employers now have approved means for gathering confidential health information on employees and their spouses to promote health and prevent disease.



Are IP Addresses PII? Why Businesses Should Be Cautious About IP Addresses

By Clay Venetis*, Hanley Chew and Jonathan S. Millard

We often hear the following phrase from clients: “We are not collecting PII, we only collect IP addresses.” Companies may be surprised to hear that the law does not always support that view, and businesses must be cautious in their assessment in this area, as multiple laws govern the use of personally identifiable information (PII), which are not always consistent with regard to the classification of Internet Protocol (IP) addresses.

The Courts

In In re Nickelodeon Consumer Privacy Litigation, the U.S. Court of Appeals for the Third Circuit found that IP addresses do not constitute PII under the Video Privacy Protection Act (VPPA).1 Congress passed the VPPA “[t]o preserve personal privacy with respect to the rental, purchase or delivery of videotapes or similar audiovisual materials.”2 In light of this legislative history, the court held that PII under the VPPA should be limited to “the kind of information that would readily permit an ordinary person to identify a specific individual’s video-watching behavior.”3 IP addresses and other static digital identifiers, the court noted, were likely of little help in identifying an actual person and what videos he or she may have rented, purchased or obtained.4

The U.S. Court of Appeals for the First Circuit takes a more expansive view of PII under the VPPA. In Yershov v. Gannett Satellite Info. Network, Inc., the court held that unique identifiers, such as a cellphone identification number and GPS coordinates, that could theoretically identify a user are considered PII under the VPPA.5 Yet it is unclear whether the First Circuit would find IP addresses alone to be PII, as Yershov also involved geo-location data, which makes it easier to identify an actual individual.

The majority of federal courts that have addressed the issue of whether IP addresses are PII, however, side with the Third Circuit, finding that static identifiers do not “identify” anyone because they are strings of anonymous numbers, and that the possibility of matching these identifiers with other data is too hypothetical.6 Application of these laws is therefore far from uniform.

Federal Statutes

Federal statutes also vary in their approach to IP addresses. The Children’s Online Privacy Protection Act (COPPA), which regulates use of online information about children under the age of 13, classifies IP addresses as “personal information,” although it does not use the term “PII.”7 The Federal Trade Commission, which is responsible for defining personal information under COPPA, has expressly included persistent identifiers, such as IP addresses, in its definition of personal information under the statute.8 The Health Insurance Portability and Accountability Act (HIPAA), which regulates health information sharing, treats IP addresses slightly differently.9 HIPAA does not expressly define IP addresses as personal information, but instead states that only after IP addresses are stripped from health information can a “covered entity [under HIPAA]…determine that health information is not individually identifiable.”10 Other statutes are even less clear. The Gramm-Leach-Bliley Act (GLBA), covering information held by financial institutions, defines PII as “nonpublic personal information.”11 It is therefore uncertain whether nonpublic IP addresses may fall under this definition if they are tied to the information consumers provide to their financial institutions.
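As a practical illustration of the HIPAA point above, the Python sketch below strips IPv4 addresses from free-text records. It is a rough sketch of one small step of de-identification, deliberately ignoring IPv6 and the many other identifier categories the regulation lists; the record format is an assumption.

```python
import re

# Hypothetical sketch: remove IPv4 addresses from free-text records as one
# step toward HIPAA de-identification (45 CFR 164.514(b)(2) lists IP
# addresses among the identifiers to remove). IPv6 addresses and the other
# identifier categories are not handled here.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def strip_ip_addresses(text: str) -> str:
    return IPV4.sub("[REDACTED-IP]", text)

record = "Patient portal login from 203.0.113.42 at 09:14; lab results viewed."
print(strip_ip_addresses(record))
# -> "Patient portal login from [REDACTED-IP] at 09:14; lab results viewed."
```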

Position in the EU

Businesses should also note that EU laws generally consider IP addresses to be PII, or “personal data,” as defined under applicable EU law. Opinions from the Advocate General and the Article 29 Working Party (a group of privacy regulators) have seemingly supported this position, and on October 19, 2016, the Court of Justice of the European Union ruled that dynamic IP addresses can constitute “personal data,” just like static IP addresses, affording them some protection under EU law against being collected and stored by websites. In addition, under the General Data Protection Regulation (GDPR), which is set to replace the current law in May 2018, the definition of personal data includes “online identifiers,” which, according to Recital 30, include IP addresses. It appears, therefore, that the EU is moving toward a more uniform approach in this area.

Conclusion

In the United States, guidance concerning whether IP addresses are PII is piecemeal. Decisions such as In re Nickelodeon only determine whether IP addresses are protected as PII for specific statutes in specific federal circuits. Accordingly, businesses should be cautious in making sweeping conclusions about their collection and use of IP addresses, as it varies by business type, data type and legal jurisdiction.

*Clay Venetis is a summer associate in Fenwick's litigation group.


1 See 827 F.3d 262 (3d Cir. 2016).

2 Id. at 278.

3 Id. at 290.

4 See id. at 283.

5 See 820 F.3d 482, 486 (1st Cir. 2016).

6 See, e.g., In re Hulu Privacy Litigation, No. C 11-03764 LB, 2014 U.S. Dist. LEXIS 59479 (N.D. Cal. Apr. 28, 2014); Robinson v. Disney Online, No. 14-cv-4146 (RA), 2015 U.S. Dist. LEXIS 142486 (S.D.N.Y. Oct. 20, 2015); Eichenberger v. ESPN, Inc., No. C14-463 TSZ, 2015 U.S. Dist. LEXIS 157106 (W.D. Wash. May 7, 2015); Ellis v. Cartoon Network, Inc., No. 14-cv-484 (TWT), 2014 U.S. Dist. LEXIS 143078 (N.D. Ga. Oct. 8, 2014), aff’d on other grounds, 803 F.3d 1251 (11th Cir. 2015); Locklear v. Dow Jones & Co., 101 F. Supp. 3d 1312 (N.D. Ga. 2015).

7 15 U.S.C. § 6501(8)(G).

8 Children’s Online Privacy Protection Rule, 78 Fed. Reg. 3,972, 4,009 (Jan. 17, 2013).

9 42 U.S.C. § 1320d(6)(B)(ii).

10 45 C.F.R. § 164.514(b)(2)(i)(O).

11 15 U.S.C. § 6809(4).