Posted on Friday, April 11, 2014 at 6:14 pm

DOJ and FTC Release Policy Statement on Antitrust Implications of Information Sharing

Yesterday, the Federal Trade Commission (“FTC”) and the Department of Justice (“DOJ”) – the two federal antitrust enforcement agencies – issued a joint antitrust Policy Statement regarding arrangements through which industry participants, including competitors, share cybersecurity information. The statement outlines the agencies’ enforcement policy and analytical approach to information exchanges focused on cybersecurity issues. The Policy Statement makes clear that the antitrust laws do not stand as “a roadblock to legitimate cybersecurity information sharing.” In fact, the Assistant Attorney General for DOJ’s Antitrust Division, Bill Baer, called the Policy, which gives industry a great deal of leeway to share cybersecurity information, an “antitrust no-brainer.”

The antitrust laws have traditionally treated exchanges of information between and among competitors with a fair amount of suspicion. This Policy Statement is intended to give companies, even if they compete, the green light to share much-needed information to protect against cyber-attacks. It also serves as yet another indicator that the agencies consider cybersecurity a major threat to the nation’s economy and security. From an antitrust perspective, the ability of industries to collaborate to prevent attacks is particularly relevant given the FTC’s recent spate of aggressive investigations and enforcement actions over data breaches that expose consumers’ sensitive personal and financial information to unintentional disclosure or theft by cybercriminals.

The agencies and the Obama administration have recognized the complexities associated with addressing these rapidly evolving threats, which require both companies and government agencies constantly to adapt to defend against new types of attacks. Given both the economic and national security concerns raised by cybersecurity, the Obama administration issued a February 2013 Executive Order on the importance of government/business collaboration on cybersecurity. That order in turn led the National Institute of Standards and Technology (“NIST”), in February 2014, to issue a voluntary cybersecurity framework.

As the FTC and DOJ note in their Policy Statement, public/private collaboration alone cannot solve the cybersecurity issues that U.S. companies face. Companies must also collaborate to share information about emerging threats, as well as to share potential solutions. In fact, the most useful information-sharing is often not from government or from other areas of the economy, but among companies in the same industry (whether energy, financial services, retail, healthcare or hospitality), which tend to be targeted by similar malware and/or the same groups of attackers. The agencies note that some formal and informal private-to-private information sharing mechanisms (like Information Sharing and Analysis Centers (“ISACs”)) do exist in certain industries, but also note that some companies have expressed a reluctance to share information with their competitors due to antitrust concerns.

To allay these concerns, the Policy Statement outlines the agencies’ general policy on information exchange, as well as the specific analysis they apply to exchanges of cybersecurity information. The agencies’ approach to information sharing is spelled out in the 2000 Competitor Collaboration Guidelines and the 1996 Health Care Guidelines. Generally speaking, the agencies are primarily concerned with exchanges involving competitively sensitive information – e.g., recent, current and future pricing, cost information, and output information – because such exchanges might facilitate market allocation or price fixing among competitors.

Generally, information exchanges, without more, are not illegal per se. Instead, the antitrust agencies apply a balancing test known as the “rule of reason,” which weighs the potential procompetitive benefits associated with an exchange against the anticompetitive harm that might result. In performing this analysis, the agencies focus on the context in which the information is exchanged, the parties exchanging the information, the nature of the information exchanged, and whether the exchange generates any procompetitive benefits, like increased efficiency, lower costs, or increased output.

In the Policy Statement, the agencies walk through how the general information exchange analysis would apply to exchanges of cybersecurity information. First, the agencies note that such exchanges increase efficiency and improve information security, both of which are procompetitive. Second, the agencies address the nature of the information, explaining that cybersecurity information tends to be highly technical. For example, the agencies note that companies might exchange a known source IP address for a denial of service attack or a threat signature for a new type of attack. Information such as this is not the type of competitively sensitive information relating to price, cost, or output that generally concerns the agencies. Accordingly, if companies confine their sharing to technical information that does not reveal information traditionally treated as competitively sensitive, the antitrust risk should be minimal.
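To make the agencies’ distinction concrete, here is a minimal, hypothetical sketch of what a purely technical exchange might contain. The field names and values below are invented for illustration (loosely inspired by STIX-style indicator sharing) and do not reproduce any real standard’s schema:

```python
# Hypothetical sketch of the kind of purely technical indicator a company
# might share with an ISAC: a malicious source IP and a malware file hash.
# All field names and values are illustrative, not drawn from a real schema.
import json

def make_indicator(ioc_type, value, context):
    """Package a single indicator of compromise (IOC) for sharing.

    Note what is absent: nothing about prices, costs, output, customers,
    or vendors -- only technical facts about the threat itself.
    """
    return {
        "type": ioc_type,    # e.g. "source-ip", "file-hash", "signature"
        "value": value,      # the technical observable
        "context": context,  # e.g. the attack in which the IOC was seen
    }

indicators = [
    make_indicator("source-ip", "203.0.113.7", "DDoS against web tier"),
    make_indicator("file-hash", "9f86d081884c7d65...", "POS memory scraper"),
]

print(json.dumps(indicators, indent=2))
```

The point of the sketch is what the record omits: a payload like this reveals nothing traditionally treated as competitively sensitive, which is precisely why the agencies view such exchanges as low-risk.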

Finally, the agencies consider any potential harm to competition caused by an exchange of cybersecurity information. Due to the fact-specific nature of this inquiry, the agencies reference DOJ’s October 2000 business review letter to the Electric Power Research Institute, Inc. (“EPRI”), in which it analyzed a proposed cybersecurity information exchange program.

EPRI is a nonprofit organization focusing on technological solutions to issues in the energy industry. It proposed exchanging information concerning best practices and information relating to vulnerabilities. In time, EPRI anticipated its members engaging in discussion or analysis of real-time cybersecurity threats. In evaluating the exchange, the DOJ noted that the information exchanged would focus on cyber and physical security, and that EPRI had said it would not allow participants to exchange either price or cost information, or vendor recommendations. Ultimately, the DOJ concluded that:

 [a]s long as the information exchanged is limited…to physical and cybersecurity issues, the proposed interdictions on price, purchasing and future product innovation discussions should be sufficient to avoid any threats to competition. Indeed, to the extent that the proposed information exchanges result in more efficient means of reducing cybersecurity costs, and such savings redound to the benefit of consumers, the information exchanges could be procompetitive in effect.

Both the new Policy Statement and the underlying EPRI business review letter should give companies comfort that collaborating on cybersecurity issues with competitors will not lead to scrutiny from the agencies. Nonetheless, counsel should be careful to remind participants in such exchanges to keep them focused on technical issues, as broadening the scope of the discussion to include vendor recommendations, pricing, or cost will create antitrust risk.

 

Posted on Wednesday, April 2, 2014 at 10:33 am

Message to the White House on Big Data’s Range of Apprehension, Privacy and Unfairness

“The risk reasonably to be perceived defines the duty to be obeyed, and risk imports relation; it is to another or others within the range of apprehension.”

As you know, Helen Palsgraf was on a train platform in the early 20th Century when a man, jostled in trying to board a train while carrying a package of fireworks, dropped the package, and the explosion made scales fall on her.  In denying her rights of recovery against the railroad, Judge Cardozo’s words above greased the wheels of the dumb mechanical networks and satanic mills of the first industrial revolution (belatedly, as law tends to do), where scales could fall on Ms. Palsgraf unforeseeably.  The railroad’s range of apprehension–ability to anticipate–then was limited, but how can law handle the information networks of this industrial revolution, which expand the range of apprehension as far as we want in almost any direction we want?   If we’re smart, we don a blindfold where we don’t need and want to know, and tell people about it, like Google not letting Glass do facial recognition.  Because “intentions are more valuable than identities” in big data economics, some companies are destroying data that enable or at least facilitate reidentification and protecting data against reidentification.   As the many types of possible inappropriate/impermissible discrimination associated with big data become more and more clear, these de-identification or pseudonymization efforts may in some cases have to give way to approaches that protect people by identifying them; for this reason, we can expect some traditional contrasts between US anti-discrimination law and European privacy law to continue in the big data economy.

Again and again, it is one of the most difficult questions for electronic communication networks and websites, and ever more so in the big data economy:  To what extent do we have an obligation to know–and then to act on–everything on our servers or in our cloud, to police the content that users enter, to take action to protect something or someone, to report violations?  What obligations does the “100% auditability” of big data create?   And the obligations keep growing.   A recent FTC investigation found a franchisor responsible for privacy violations of its franchisees in part  because the franchisees’ actions were documented on the franchisor’s email system, as how many franchisees’ actions are not?

A great example of somebody who just left his blindfold and maybe his brain in the car is the automotive executive who said at a conference that  “We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.”  When his statement was reported, he immediately retracted it, stating that “We do not track our customers in their cars without their approval or their consent.”  (Note to consumers: Check your purchase agreements, leases, financing agreements and insurance policies/applications.)

There are global and industrial-strength issues, however, in which we cannot get away with a blindfold–cannot avoid the benefits and burdens of foresight–as easily as we can with privacy and fairness.  How much longer will commercial airliners be flying without universal GPS tracking after Malaysia Airlines Flight 370?  Even by 2008, there were more non-human nodes online than people; by 2020, there may be more than 50 billion non-human nodes online.   When people are not the surveillance targets, the technology designer, manufacturer or user has more trouble donning a blindfold.  Imagine inspecting a factory or a construction worksite wearing Glass.  Thousands of pictures have been taken of everything, and have been uploaded into tools capable of predictive pattern detection as never before, as a scan with Glass of the  East New York Long Island Rail Road station in the 1920s might have revealed scales that would fall even if some fireworks exploded far away on the train platform.  Even if exposures could not have been detected, there are still all of those pictures, each potentially more damning than the “smoking gun” email in or out of context in today’s product liability litigation.

And what of the machines, buildings and other products created in the big data economy, sending off information about everything around them for their entire lives?   The scales themselves at the train station would be admitting their proclivity to fall 24/7, perhaps identifying elevated threat levels when packages containing fireworks entered the train station, themselves of course broadcasting 24/7.    Who will be responsible for being on the receiving end of such messages for the entire life of a building, a bridge or an airplane?  Is that the same entity that will keep the software updated and protected for that whole life?   The same entity responsible for incident response and safety?   Or, with no range of apprehension to limit the duty to be obeyed, will we allocate non-consensual duties in new ways?

Palsgraf  may be the most famous U.S. case of the 20th Century on non-consensual duties (torts, but also property and statutes).   FIPPs or no FIPPs, non-consensual duties look like they are multiplying exponentially in the emerging internets of everything, if only because there will be so many interactions with so many different types of intelligent objects in any moment as to make even the best efforts to create meaningful choice incomplete.  So it looks like we’re headed into another industrial revolution in which individuals may be swept up in systems over which they have no control (like the satanic mills), and therefore non-consensual duties will play a big role, both outside and within “trust networks,” in the areas of  both privacy and fairness.   We need norms for privacy and fairness, which is what the current White House big data initiative is about.   We can be so much better than we were in the Palsgraf era, though, because when we take off the blindfold, the probabilities of harms and benefits can be infinitely more contextually informed now than they were then through the use of predictive analytics itself, and better-informed judgments about risk of harm can be the basis for more well-grounded non-consensual duties of privacy and fairness.

Posted on Monday, March 31, 2014 at 5:51 pm

Fifth Circuit Upholds Finding That Overly Broad Confidentiality Agreement Violated the National Labor Relations Act

We have reported in previous Legal Alerts that the National Labor Relations Board (“NLRB”) is closely scrutinizing employers’ personnel policies and workplace rules to identify language that unlawfully restricts employees’ rights under the National Labor Relations Act (“NLRA”). On March 24, 2014, the United States Court of Appeals for the Fifth Circuit upheld an NLRB finding that an employee confidentiality agreement’s provision prohibiting employees from disclosing “financial information” and “personnel information and documents” to outsiders violated the NLRA.

The Court’s Decision in Flex Frac Logistics, L.L.C. v. NLRB

The NLRA gives employees the right to act concertedly for the purpose of collective bargaining or other mutual aid and protection. The NLRB and the courts have recognized that this statutory provision gives employees the right to discuss wages and other terms and conditions of employment with each other and with a union. An employer’s interference with this right, whether by disciplining employees who exercise the right, by expressly prohibiting such conduct, or by merely maintaining a workplace rule that chills the exercise of the right, is an unfair labor practice under the NLRA.

In its recent decision in Flex Frac Logistics, L.L.C. v. NLRB, the Fifth Circuit reviewed an NLRB decision finding that an employee confidentiality agreement unlawfully interfered with employees’ right to engage in protected concerted activity. The confidentiality agreement prohibited employees from disclosing confidential information to anyone outside the company and defined “confidential information” to include “financial information” and “personnel information and documents.” Although the agreement did not explicitly prohibit the disclosure of wage information, the Fifth Circuit found sufficient evidence to support the NLRB’s conclusion that the agreement would chill employees’ exercise of their rights under the NLRA because employees would reasonably believe that the agreement prohibited the disclosure of wage information. The court noted that the agreement’s definition of confidential information included “financial information,” which necessarily encompasses wages, and that the reference to “personnel information” was not limited so as to exclude wage information. Moreover, the court stated that the NLRB did not have to base its finding of an unfair labor practice in this regard on evidence that employees did, in fact, interpret the confidentiality agreement as restricting their disclosure of wage information to outsiders. It is sufficient, the court held, that the language of the agreement would reasonably tend to chill employees’ exercise of their NLRA rights. Because the language at issue here could reasonably be interpreted as barring disclosure of wage rates, the court upheld the NLRB’s ruling that the confidentiality agreement violated the NLRA.

Practical Implications

In recent years, the NLRB has applied increasing scrutiny to employee handbooks, workplace rules, and employment contracts to identify provisions that may reasonably be interpreted as prohibiting conduct protected by the NLRA, even going so far as to find a handbook provision requiring employees to be courteous to others unlawful. As the Flex Frac Logistics decision illustrates, employers cannot rely on the courts to rein in the NLRB’s excesses in this area. Employers should therefore be proactive in reviewing their personnel policies and employee agreements to ensure that they do not contain provisions that are so broadly worded as to infringe upon employees’ right to act in concert with respect to the terms and conditions of employment. Often, the addition of only a few clarifying words can make the difference between a lawful rule that furthers the employer’s legitimate business interests and an unlawful rule that can lead to costly unfair labor practice proceedings.

Posted on Tuesday, March 18, 2014 at 1:38 pm

New York Department of Financial Services Begins Accepting Applications for the Establishment of Regulated Virtual Currency Exchanges

On March 11, 2014, the New York Department of Financial Services (“NYDFS”) issued a public order announcing that the NYDFS will consider formal proposals and applications for the establishment of regulated virtual currency exchanges operating in New York. It is expected that the NYDFS will expand its oversight to include those virtual currency exchanges doing business with New York residents. The NYDFS stated that formal proposals and applications may be submitted immediately and may be modified by the applicant through discussions with the NYDFS during the application process to ensure strong legal and operational controls, including anti-money laundering (“AML”), cyber security and consumer protections.

The Order is the NYDFS’ latest step toward regulation following its January 2014 public hearing,  which explored potential regulatory frameworks for virtual currency-related transactions. This move by the NYDFS is also a result of recent events, including the collapse of Mt. Gox, the vulnerabilities in the virtual currency markets and the need for stronger oversight through regulation. The NYDFS also stated that it continues to work on regulations, including a “BitLicense” specific to virtual currency transactions and activities, and intends to propose a regulatory framework no later than the end of the second quarter of 2014.

Although the Order does not provide specific guidelines for virtual currency exchange applications and proposals, we expect that the requirements will be similar to those applicable to money services businesses, potentially including bond and net worth requirements. Firms should also ensure their proposals include robust internal control systems covering AML, cyber security and consumer protections. Specifically, and similar to other financial institutions covered under the Bank Secrecy Act and its implementing regulations, applicants should establish AML programs tailored to virtual currency transactions that include (i) the development of internal policies, procedures, and controls to combat money laundering; (ii) the designation of a compliance officer; (iii) an ongoing training program; and (iv) an independent audit function to test the program.

For more information, please contact any member of the Financial Institutions team.

Aaron M. Kaslow 202.508.5825 Akaslow@kilpatricktownsend.com

Michael A. Mancusi 202.824.1419 Mmancusi@kilpatricktownsend.com

Stephen F. Donahoe 202.508.5818 Sdonahoe@kilpatricktownsend.com

Erich M. Hellmold 202.639.4734 Ehellmold@kilpatricktownsend.com

Kevin M. Toomey 202.508.5859 Ktoomey@kilpatricktownsend.com

Posted on Thursday, January 30, 2014 at 3:54 pm

An Open Letter to the Next Four Retailers to Suffer Breaches of their POS Systems

A 17-year-old Russian boy named Sergey Taraspov, who uses the name “ree[4],” writes software he calls BlackPOS or “Kaptoxa” that can scrape payment card information from a point-of-sale (POS) system’s memory at the moment of the swipe, the one moment at which the card information is decrypted and (for the sensitive data) not yet destroyed.  He sells it “off-the-shelf” to Eastern European crime rings for $2,000 or a percentage of sales of personal information.   Once it’s in a retailer’s systems (e.g., through a targeted phish), it doesn’t stop with the PII but can send back much more.

Seven retailers are targeted and invaded by Kaptoxa.  Three of them (HouseofStraw, HouseofSticks and HouseofSand) are outed by security blogger Brian Krebs before they tell their customers about it.  HouseofStraw and HouseofSticks wait until their lawyers and “crisis managers” and law enforcement say it is OK to disclose, then do so, dutifully offering useless (because payment card numbers can simply be changed) and expensive credit monitoring in messages that are crafted badly from both customer-relations and technical standpoints (the former because they are written by “crisis managers,” the latter because they look like phishing emails).  HouseofSticks later offers even less useful credit monitoring to victims of a massive, related breach of email addresses.  Both of them are facing numerous customer class actions and state and federal investigations, and HouseofSticks a shareholder derivative suit.  HouseofSand is outed earlier in the forensics process, and acknowledges the issue, but has little to report and no strong action plan for its customers.  The three remaining retailers don’t know yet.

You represent HouseofBricks, which is just discovering (through its merchant processor) the first evidence of exfiltration of card numbers. What should HouseofBricks (HoB) do?

  • Immediately,  as soon as it sees the slightest sign of trouble and before any investigation, HoB should contact as many customers as could conceivably be involved and tell them something like: “We just discovered there may be an issue with some credit card numbers.  We’re investigating fully and will keep you fully apprised, but want to warn you to check your bank/credit card statements immediately, and if you see any strange charges, call your bank/issuer and get it to reissue your card, also immediately.  Then please call to tell us right afterwards.”  HouseofSticks should also immediately warn customers of the email breach and give them tips on avoiding phishing attacks.  Why?
    • The banks have 13 months within which to decide whether to reissue cards, and in those 13 months the fraud costs are piling up, costs that will be shifted to HoB, so the banks can afford to let them pile up.  Remember, this is the same US payment card system that hasn’t yet introduced the more secure EMV chip cards already in use in Europe and elsewhere.
    • The major harm to be avoided from an email breach is phishing attacks, and the only way to prevent them is to warn consumers what to watch for.
    • For once, consumers know that new and sophisticated malware is invading retailers, so they will not blame HoB for disclosing the issue, particularly if it does so in a way that is faster and more effective than the previous three.
    • The holiday shopping season is over.
  • If HoB communicates with its customers by email, it should not send emails containing links on which to click for more information, or otherwise include features in the email which are likely to make reasonably paranoid consumers assume that they are phishing emails.
  • HoB should not offer credit monitoring.   It’s not the best way to prevent harm in a credit card breach; card reissue is.  And in an email breach, credit monitoring is irrelevant and useless in relation to the primary phishing risk.  Credit monitoring is both expensive and absurdly time-limited (particularly in a market so rich in stolen card numbers, where there’s never any problem for the black market in just waiting a year until the monitoring expires), and there are so many better ways to spend that money building trust with HoB’s customers.   If the customer does not want to cancel the card because it would mean unravelling too many online or automatic payment relationships, there are better ways to get long-term protection.  (Credit monitoring makes more sense for breaches of types of information that retailers generally don’t have except on their employees, such as social security or driver’s license number breaches.)
  • HoB should communicate early and often with the appropriate state AGs/regulators/law enforcement, and take advantage of federal law enforcement knowledge of Kaptoxa.  If law enforcement objects to the open customer outreach strategy, HoB may want to politely point out that the criminals are well aware that they have been detected, but are happily going about the trade in personal information on open internet websites.
  • Once HoB investigates and determines containment and remediation costs, it may want to weigh the costs of outsourcing its point of sale (POS) system entirely or solely in connection with capture of cardholder information, either forever or until more secure credit cards are introduced.  Payment processors that specialize in security have developed systems of “aliases” that would allow HoB to build customer profiles without exposure to cardholder information.  And creating that huge, integrated national or global POS system, which HoB did in part to try to improve security, may have only succeeded in turning HoB into a really big “target.”
Posted on Tuesday, January 7, 2014 at 4:16 pm

A Recipe for New Info Governance and Data Asset Protection

A Big Data Information Governance Model

[This new article in Law 360 incorporates insights from a number of previous blog posts to build the case for one big data information strategy incorporating both governance and data asset protection.  Apologies and caveats to regular readers for the redundancies. As always, criticism eagerly sought.]

Law360, New York (January 07, 2014, 1:22 PM ET) – Governance over shared and open data and high-velocity and agile processes is not duck soup, but it is something organizations being transformed by big data and the industrial Internet have to do. It means big changes in information security, privacy, records management, defensible disposal, intellectual property and trade secrets protection programs that few organizations, lawyers, regulators or managers have faced yet.

Most lawyers still advocate records management and defensible disposal programs that get rid of all non-record documents as soon as they are no longer needed, even as more and more of those documents live in dynamic databases, some (but not all) of which will become increasing sources of value as new analytics tools are applied. The logic of defensible disposal has not changed for the many types of documents whose retention poses greater risk than value; that logic just needs to be balanced by intelligent intuitions about how certain documents and data with very limited value now may become much more valuable in the future.

I consider some of these dynamic databases of sleeper documents candidates for data lakes. Call them “lakes” because, unlike in orderly warehouses, big things swim at you fast when you so much as cast your line. From a privacy and security standpoint, data lakes need two things to compensate for the relaxation of traditional controls on collection, such as data minimization and the location of trade secrets in more controlled environments. The rationales for those traditional controls on collection are that:

  • decisions on use are not trusted, and
  • large databases become malware targets and breach victims.

Thus the two things data lakes most need are:

  • trustworthy, transparent and accountable controls, decisions and decision-makers regarding use, and
  • really good information security.

So much easier said than done, of course, but I also know a number of organizations that are getting incredibly serious about not just the second one (and see below on how hard that is), but the first one, trusted controls. Trusted controls in the big data world are by no means just about privacy, but ultimately matters of the ethics and compliance of algorithms. Big data classifies in ways that run afoul of employment law, human rights law, sector-specific prohibitions on discrimination and a multitude of other standards. An important role for the lawyers and compliance and ethics officers going forward is assessing the fairness and appropriateness of algorithms.

The records and information management programs of the present and future need to balance defensible disposal with data lakes. In some cases, we have used a zoning-like approach in which certain areas are designated for data lakes, and the construction of the two critical controls then needs to begin.

Of course the big data initiative usually involves not just those lakes, but the integration of new varieties of information from outside of the organization. This I call bringing in the Internet ocean. Often, the data being brought “in” is vast, high-velocity and very unfamiliar to the organization; it does not meet the organization’s quality standards and is initially unintelligible in some respects; it often needs to be tagged and valued as new “data asset classes.”

The big data tools encourage the incorporation of as much of this “great unwashed” data as the organization wants into its information stores, but lawyers should watch out carefully for the movement of massive databases into their clients’ possession or control, lest a preservation obligation require a legal hold that will create a “digital landfill” much more massive than was previously imaginable.

Then there are the challenges to data rights posed by sharing and agile processes. Intellectual property law will continue to have a very tough time keeping up with data asset protection needs, so the protections will default to trade secrets and very carefully crafted ownership and use rights for original data, usage data and inferences (or derived data) in contract terms. Organizations will therefore need data asset protection programs focused more on trade secrets and contract terms than in the past.

Big data is blowing up information governance programs, however, not just through rendering old programs inadequate, but by pointing the way to new, “big data”-driven compliance and risk management programs. This is most obvious now in information security, where traditional intrusion detection has been supplemented with exfiltration prevention based on advanced analytics. The “100 percent auditability” of big data demands very important choices by all those responsible for information governance about where to create visibility.
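As a toy illustration of what analytics-driven exfiltration prevention means in practice (the data, threshold and host names below are invented; real systems use far richer features than raw byte counts), one can flag hosts whose outbound traffic volume sits far outside the fleet baseline:

```python
# Toy sketch of analytics-driven exfiltration detection: flag hosts whose
# outbound data volume is far above the fleet baseline. The hosts, volumes
# and z-score threshold here are hypothetical, for illustration only.
from statistics import mean, stdev

def flag_exfiltration(outbound_bytes, z_threshold=3.0):
    """Return hosts whose outbound volume exceeds z_threshold std. devs."""
    volumes = list(outbound_bytes.values())
    mu, sigma = mean(volumes), stdev(volumes)
    if sigma == 0:
        return []  # no variation in the fleet, nothing stands out
    return [host for host, vol in outbound_bytes.items()
            if (vol - mu) / sigma > z_threshold]

# Twenty hosts behaving normally, one uploading a hundred times as much.
traffic = {"host-%02d" % i: 50_000 for i in range(20)}
traffic["pos-terminal-07"] = 5_000_000  # anomalous outbound volume

print(flag_exfiltration(traffic))  # the anomalous host is flagged
```

The governance point is the one in the paragraph above: someone has to decide to create this visibility (collecting per-host outbound volumes at all) before any analytics can act on it.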

Next, we will apply these concepts to a common scenario.

How to Build Your New Governance and Data Asset Protection Programs

This section will get into the new information risk management concretely, focusing on how you can use the big data initiative to define new trade secrets and new ways of protecting them, with implications for your contracts involving data and your development of data asset protection plans. It will also address what should and rarely does happen before bringing in external data sources and streams.

Internal Databases

Let us say your organization is starting to figure out how to get more value from its own databases. In most cases, it is important to recognize that patent and copyright law are likely to offer only limited protections in the U.S. (unlike in Europe, where the Database Directive (96/9/EC) provides copyright-like protection to “authors” invested in the contents or presentation of their databases), so most of your efforts to protect the information must focus on careful definition and protection of trade secrets and contractual rights associated with the raw and inferred data and databases.

So consider this approach: As your organization is identifying types of data and repositories that are of interest for the big data initiative, it may be viewed as essentially defining those types and repositories as trade secrets requiring special new protection. For trade secret protection under the Uniform Trade Secrets Act, you need to show reasonable secrecy measures and economic value from those secrecy measures, and secrecy can be achieved through agreements, policy, training and infrastructure. Therefore:

  • Everybody handling those types of data you anticipate using now or in the future could get confidentiality agreements beyond their general obligations to protect company assets;
  • Careful protection of ownership and use rights and clean data destruction of the raw, usage and derivative data in any contracts with analytics vendors is critical to protecting both the data from security and privacy standpoints and its trade secret status;
  • Policies could be modified to focus on data asset protection from a trade secrets perspective, requiring secrecy and protection;
  • The information security levels assigned to those types of data could be the levels accorded sensitive information (more on this under “Drawing the Information Security Line,” below);
  • Particularly if the data sources contain personal information, focusing on trustworthy, transparent and accountable controls, decisions and decision-makers on use and/or the ever-changing standard of reasonableness in anonymization will only become more critical from a privacy standpoint and will bolster trade secrets arguments as well; and
  • Training programs could stress the designation of data as trade secrets and the importance of continued efforts to protect the data as trade secrets.

External Databases

Even if the initiative begins with a focus on extracting value from data already possessed by the organization, that focus likely leads to incorporation of new data types, such as machine-to-machine and social data, and other data streams from outside the organization. Legal needs to weigh in before any such data is appropriated, for many reasons, including:

  • The ownership and use rights associated with the external data, and the ways in which they affect the ownership and use rights and trade secret status of derived data and inferences as well as internal data, are critical;
  • If the external data is brought into the organization’s custody and control, as the big data storage/analytics tools encourage, and any of it might be subject to existing preservation obligations resulting from reasonably likely or pending litigation or investigations, the organization may be forced to expand its legal holds and begin to grow “digital landfills” of unprecedented size;
  • The organization may have regulatory or other duties, such as privacy or information security obligations, to understand, manage and/or protect the information once it possesses and controls it; and
  • Antitrust concerns should be examined in some cases.

Drawing the Information Security Line, and Current Challenges

Once we focus on all the shared and open data, we are ready to draw the critical information security line between the more highly secured data and everything else. On one side of the line are the trade secrets and IP and the protected personal information (PPI), and any other sensitive or confidential information or other information you are obliged to or want to protect. On the other side are the disposable data and much of the open and shared data (which overlap with the PPI, requiring secure destruction of the latter). The reason this line is particularly difficult to draw in a big data context is that most big data tools were constructed for extremely fast parallel processing and inexpensive storage of massive volumes of data, presenting security challenges that can be summarized as follows:

  • Massive parallel processing through rapid incorporation of nodes; therefore
  • No inherent authentication of nodes; therefore
  • Danger of rogue nodes; plus
  • No role-based access once you’re into a node or cluster; plus
  • No encryption between nodes.

Apache Big Data Architecture

Of course, this very architecture also enables the smart data security of focused data loss prevention mentioned above, but 2014 will be a big year for big data system “hardening,” possibly including node authentication, logging tools, security between nodes, file-layer encryption on each node and pre-deployment validations. The importance not only of information security but also of trade secret protection makes these important due diligence issues for 2014 initiatives.

Refining Defensible Disposal

As you identify all the data types you may want and need to protect as trade secrets and those that have continuing value as shared and/or open data, you can also use that knowledge to improve or jump-start a defensible disposal program for the other data stores, and particularly the ones that come to appear worthless as you’re examining the new trade secrets.

In the longer term, these insights and the new trade secrets will help your records and document management programs and database governance programs to balance “data lakes” and defensible disposal, by enabling better-informed judgments about information and data that have ongoing value, and more defensible and informed judgments about the useless data, or the data types whose cost or risk of harm exceeds their worth, that can and should be destroyed.

Summary: A Recipe for New Information Governance and Data Asset Protection

  • Distill your new trade secrets as you make your big data plans, using the recipe described above.
  • Bring in open, shared and/or Internet data to taste, defining ownership and usage rights — particularly in the inferences — carefully, and not necessarily integrating them into your information stores if you want to avoid preservation obligations.
  • Do what you can to specially protect the trade secrets, IP and protected personal information (PPI), and any other sensitive or confidential information or other information you are obliged to or want to protect.
  • Use the greater visibility into the data stores to identify areas of data that are very unlikely to have any value in any business, investigative or litigation context, which become your most cost-effective candidates for defensible disposal.
  • Shake it well, try lots of different ways per second of cooking and combining it and detecting patterns, and please let me know what you’re coming up with that’s most useful to you.

Jon Neiditz is a partner in Kilpatrick Townsend’s Atlanta office, where he leads the firm’s privacy and information security practice.

 

Posted on Wednesday, December 25 2013 at 12:52 pm by

Baking Your Big Data Information Governance Program; In Celebration of 2013

A Big Data Information Governance Model

We all know the feeling; relatives on their way over, and no time left to bake a big data information governance program from scratch.  Got you covered; knew you were coming, so the elves baked one for you while you slumbered snugly; dropped it down your digital chimney faster than you could click on Delivery Drone Shipping Option.  Not quite heat ‘n’ serve, but hope it will open the door to some useful discoveries.

We started this blog in April by sharing the recognition that the traditional privacy rules need massive change thanks to big data, and a call for more systematic big data strategy.  A substantial part of the year (a series of 13 posts) chronicled what I called in June “the most consequential revelations about the federal government of our lifetimes,” which have already inspired not only the expected global economic changes, but a response that in my humble opinion should give us great faith even now in the future of Democracy in America.  Getting back to our knitting, we have more recently been focused on how big data is blowing up information governance beyond algorithm compliance and ethics (not just privacy), including transforming records management and defensible disposal, and how, because IP in the US can’t keep up with big data, trade secrets and contractual protection of data become important to strategy (and in part just because of how counter-intuitive and Scrooge-like that perspective is to many big data practitioners).  But in this season of sharing, the incompleteness of the trade secrets approach to big data is so stark; so much of big data is necessarily shared and open data, regardless of how much of your own data you choose to share, particularly when you “bring in the internet ocean.”

Once we focus on all the shared and open data, we are ready to draw the critical information security line between the more highly secured data and everything else.  On one side of the line are the trade secrets and IP and the protected personal information (PPI), and any other sensitive or confidential information or other information you are obliged to or want to protect.  On the other side are the disposable data and much of the open and shared data (which overlap with the PPI, requiring secure destruction of the latter).  “Duh,” you say, “for this I gave up watching the Duck Dynasty marathon?”  The reason this line is particularly difficult to draw in a big data context is that the big data tools were constructed for speed and scale rather than security; big data security might be said to be now where cloud computing was, well, a while ago.  The security challenges can be summarized as follows:

  • Massive parallel processing through rapid incorporation of nodes; therefore
  • No inherent authentication of nodes; therefore
  • Danger of rogue nodes; plus
  • No role-based access once you’re into a node or cluster; plus
  • No encryption between nodes.

Apache Big Data Architecture

So there is a lot for us information risk managers to worry about.  To continue geeking out for the remainder of this paragraph, there are:

  • Node authentication (e.g., Kerberos), balanced against performance issues;
  • Logging tools that leverage the cluster to store events so they scale with the clusters (e.g., Splunk, open source alternatives);
  • Big data monitoring tools (that scale like big data and use big data velocity capabilities, for, e.g., malware detection or data loss prevention);
  • Security between nodes, not just between cluster and client, as the real “data in transit” issue;
  • File layer encryption on each node with good key management, plus other encryption options, all weighed against performance issues; and
  • Pre-deployment validation.
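To make the first and fourth items above a bit more concrete, here is a hedged sketch of what node authentication and inter-node security can look like in Hadoop configuration. The property names follow the standard Apache Hadoop security documentation; the values are illustrative, and a real deployment assumes Kerberos infrastructure (KDC, principals, keytabs) that is not shown here:

```xml
<!-- core-site.xml: switch the cluster from "simple" (trusting) mode
     to Kerberos authentication, and turn on service-level authorization -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<!-- protect RPC traffic between nodes and clients;
     "privacy" adds encryption on top of integrity checks -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>

<!-- hdfs-site.xml: encrypt block data in transit between DataNodes -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
```

Each of these settings trades some performance for security, which is exactly the balancing act described in the list above.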

OK, sorry, so I owe you more plain English and explanations than that, but suffice it to say that 2014 is going to be a big year in information security for big data, now that these vulnerabilities and opportunities for improvement are out in the open for all to see.  Right now, for those of you whose inner ascetic, like mine, is being inspired by this materialistic holiday to reach for ideas, I want to offer something to replace the visions of sugarplums that may still be dancing in your heads.  So here’s one possible family recipe for big data information governance:

  • Distill your new trade secrets as you make your big data plans, using the recipe described in the last post.
  • Bring in open, shared and/or internet data to taste, defining ownership and usage rights, particularly in the inferences, carefully, and not necessarily integrating them into your information stores if you want to avoid preservation obligations.
  • Do what you can to counter the temporary insecurity of big data tools to specially protect the trade secrets, IP and protected personal information (PPI), and any other sensitive or confidential information or other information you are obliged to or want to protect.
  • Use the greater visibility into the data stores to identify areas of data that are very unlikely to have any value in any business, investigative or litigation context, which become your most cost-effective candidates for defensible disposal.
  • Shake it well, try lots of different ways per second of cooking and combining it and detecting patterns, and please let me know what you’re coming up with that’s most useful to you.

Hope your holiday is filled with wonderful discoveries.

Evie

 

Posted on Monday, December 16 2013 at 11:55 am by

How Best to Dive into Big Data: Focus on Trade Secrets


This post is for in-house lawyers and information and risk managers who find themselves facing a relatively sudden plunge into mysterious lakes and oceans of big data.  For the last few years I’ve been developing legal frameworks for organizations diving into big data, e.g.:

Intro to Big Data Legal Framework

These frameworks have not really made it into this blog before in the pragmatic way they live in my practice; the blog has tended rather to emphasize and predict large-scale transformations, for example those needed in privacy programs and their regulation, and those now appearing on the horizon for information governance programs, or to make other relatively high-level observations.  This post will get into the first stages of big data risk management in a much more concrete way, focusing on how you can use the big data initiative to define new trade secrets and new ways of protecting them, with implications for your contracts involving data and your eventual development of what I call “data asset protection plans.”  It will also address what should and rarely does happen before bringing in external data sources and streams.

1. Internal Databases

Let us say your organization is starting to figure out how to get more value from its own databases.  In most cases, it is important to recognize that patent and copyright law are likely to offer only limited protections in the US (unlike in Europe, where the Database Directive (96/9/EC) provides copyright-like protection to “authors” invested in the contents or presentation of their databases), so most of your efforts to protect the information must focus on careful definition and protection of trade secrets and contractual rights associated with the raw and inferred data and databases.

So consider this approach:  As your organization is identifying types of data and repositories that are of interest for the big data initiative, it may be viewed as essentially defining those types and repositories as trade secrets requiring special new protection.  [Please, before you revoke my membership in the @Lessig Fan Club, see the next post, which integrates shared and open data.] For trade secret protection under the Uniform Trade Secrets Act, you need to show reasonable secrecy measures and economic value from those secrecy measures, and secrecy can be achieved through agreements, policy, training and infrastructure.  Therefore:

  1. Everybody handling those types of data you anticipate using now or in the future (whether on the big data project or not) could get confidentiality agreements beyond their general obligations to protect company assets;
  2. Careful protection of ownership and use rights and clean data destruction of the raw, usage and derivative data in any contracts with analytics vendors is critical to protecting both the data from security and privacy standpoints and its trade secret status;
  3. Policies could be modified to focus on data asset protection from a trade secrets perspective, requiring secrecy and protection;
  4. The information security levels assigned to those types of data could be the levels accorded sensitive information (and note that 2014 is said to be a year in which big data tools receive much-needed enterprise security “hardening”);
  5. Particularly if the data sources contain personal information, focusing on trustworthy, transparent and accountable controls, decisions and decisionmakers on use and/or the ever-changing standard of reasonableness in anonymization will only become more critical from a privacy standpoint and will bolster trade secrets arguments as well; and
  6. Training programs could stress the designation of data as trade secrets and the importance of continued efforts to protect the data as trade secrets.


2.  External Databases

Even if the initiative begins (as many do) with a focus on extracting value from data already owned and possessed by the organization, before you know it the organization will be incorporating new data types, such as machine-to-machine and  social data, and other data streams from the outside.   Legal needs to weigh in before these moments, for many reasons, including:

  1. Again, the ownership and use rights associated with the external data, and the ways in which they affect the ownership and use rights and trade secret status of derived data and inferences as well as internal data, are critical;
  2. If the external data is brought into the organization’s custody and control, as the big data storage/analytics tools encourage, and any of it might be subject to existing preservation obligations resulting from reasonably likely or pending litigation or investigations, the organization may be forced to expand its legal holds and begin to grow a “digital landfill” much larger than any it has seen in the past;
  3. The organization may have regulatory or other duties, such as privacy or information security obligations, to understand, manage and/or protect the information once it possesses and controls it; and
  4. Antitrust concerns should be examined in some cases.

3.  Trade Secrets and Defensible Disposal

As you identify all the data types you may want and need to protect as trade secrets, you can also use that knowledge to improve or jump-start a defensible disposal program for the other data stores, and particularly the ones that come to appear worthless as you’re examining the new trade secrets.  In the longer term, these insights and the new trade secrets will help your records and document management programs and database governance programs to balance “data lakes” and defensible disposal, by enabling better-informed judgments about information and data that have ongoing value, and more defensible and informed judgments about the useless data, or the data types whose cost or risk of harm exceeds their worth, that can and should be destroyed.


Posted on Sunday, November 24 2013 at 2:13 pm by

Five Myths of Data Breach and its Regulation, and How to Shatter Them


As lawmakers and regulators around the world implement data breach notification laws and rules, they will be subjected to many consultants and lobbyists with limited experience in dealing with data breaches and notification laws.  Particularly in health care, data breaches tend to be presented as betrayals of fundamental obligations of patient confidentiality best deterred through punishment of organizations experiencing breaches.  One problem with this view, as breach regulators have seen as they become more familiar with the area, is that it forces breaches underground, depriving patients and others of critical opportunities to protect themselves and preventing valuable information-sharing about threats and their management.  This article summarizes lessons learned from eight years of dealing every week with breach notification laws and managing breach response, in an effort to prepare legislators, regulators, lawyers and risk managers around the world for the misinformation coming their way.

1.  “Your organization and clients are not suffering regular breaches.”

Every health care organization is suffering regular breaches of personal information, not to mention trade secrets and intellectual property.  All information systems – within and outside of health care – are vulnerable, and most are infected with malware to a greater or lesser extent.   That is why information security has moved from security at the perimeter of the network to focus also on monitoring of activity within the network, including detecting and stopping the efforts of malware to send personal information and trade secrets to bad sites (sometimes called data loss prevention).    Health care as currently structured is an inherently decentralized and porous system, and therefore inherently insecure as compared with, say, nuclear power facilities, while it generally does not yet have the sophisticated data loss prevention systems of, e.g., nuclear power facilities.  Health care institutions, networks, exchanges and portals – online as in the physical world – all involve many players sharing data, entering and exiting, linking through multiple endpoints.
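The data loss prevention idea mentioned above, detecting and stopping attempts to send personal information to bad destinations, can be sketched in a few lines. The blocklist, the Social Security number pattern and the function below are illustrative assumptions, not any real product’s rules; production DLP systems use far richer content inspection and threat intelligence:

```python
# Hypothetical data-loss-prevention check for outbound traffic:
# block known-bad destinations and payloads that look like they
# contain a US Social Security number. All values are illustrative.
import re

BAD_DESTINATIONS = {"exfil.example.net", "malware-c2.example.org"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def allow_outbound(destination, payload):
    """Return False if the destination is blocklisted or the payload
    appears to contain an SSN; otherwise allow the transfer."""
    if destination in BAD_DESTINATIONS:
        return False
    if SSN_PATTERN.search(payload):
        return False
    return True

print(allow_outbound("partner.example.com", "quarterly totals attached"))  # True
print(allow_outbound("exfil.example.net", "hello"))                        # False
print(allow_outbound("partner.example.com", "patient SSN 123-45-6789"))    # False
```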

2. “The purpose of data breach notification law is primarily to expose and punish organizations with weak security.”

If health care organizations are inherently insecure, and if the more sophisticated their security systems become, the more they know about the breaches they are suffering, then to have a breach and find out about it is not necessarily a sign of weak security at all; indeed it may be a sign of improving security.   And notifying patients and others whose information was breached is therefore not principally a confession of bad security; so what is it?   For the individuals notified, if they are notified quickly enough it is primarily an opportunity to protect their own identities and/or financial accounts against theft and fraud.   For other parts of the organization and similarly situated organizations, it is a chance to learn about the threats and improve security and incident response systems.  It MAY also be a confession of bad security or poor response to a breach; that is for the individuals (who can vote with their feet, ceasing to be patients or customers) and regulators to figure out.

Punishment establishes important deterrent effects; that is why it is so important not to deter the wrong behaviors.    Therefore, the more thoughtful regulators punish organizations not for simply having breaches—which deters immediate disclosure—but for disclosing  too slowly or otherwise responding poorly to the breaches, or based on investigative findings for failing to deal effectively with their other security obligations.   In other words, deterrents are structured to incentivize prompt and open communication with breach victims and regulators.

3. “Breaches of healthcare information will make you lose patients or customers, and severely damage your brand.”

This statement was true in the US in 2005, when breach notification laws spread across the country as a result of the ChoicePoint breach, and two studies showed losses of 20% or more of the customers receiving breach notices.  Health care in the US was introduced to this breach-as-stigma view – which is based on the mistaken belief discussed above that having a breach indicates poor security – much later: in 2009, the HITECH Act and HHS regulations established a framework that is more regulatory than consumer notice-focused, including what the health care world universally dubbed the “Wall of Shame,” a webpage on which the names of organizations suffering breaches of 500 records or more were posted, signaling also that they were about to be subjected to long federal investigations (breaches of fewer records were not until recently subject to them).  Nonetheless, as organizations became more effective at communicating about breaches, and consumers got used to receiving notices, the effect has lessened.

In recent years, organizations with the best security in health care know they are experiencing breaches regularly.  That being the case, breaches are best seen as essential, challenging but perhaps not-too-uncommon moments in customer relationship management.  To deal effectively with the customer relationship implications of a data breach, one needs strong, clear, and above all credible and accurate narratives regarding (a) the security program that existed before the breach, (b) immediate and effective breach response – including very prompt containment of all systems issues and other vulnerabilities (stop the bleeding!) and notice to breach victims and regulators as soon as possible, sometimes including informal and immediate notice such as outbound phone calls or emails (see 5, below) – and (c) the prompt and effective security remediation after the breach to prevent recurrence of the type of breach in question if possible.  This need to get your act together so that your story will fly with ever-more-sophisticated B2B customers, consumers who can vote with their feet, and social media critics is what has made breach notification – particularly when the regulators strongly encourage it – in my experience more effective than regulation as a “nudge” to better security. Sometimes it is a story of good initial security and breach response, and sometimes – as with ChoicePoint/LexisNexis – it is a story of transformation.

4. “A requirement to tell regulators about a breach makes some sense in the absence of a requirement to tell consumers about a breach.”

If the primary purpose of breach notification is to enable individuals to protect themselves as suggested in 2, above, and breach notification is being treated by organizations with advanced security and breach response as a fairly ordinary if challenging moment in customer relationship management as suggested in 3, above,  then a requirement to tell regulators but not to tell consumers about breach risks losing the very essence of what makes breach notification law so powerful.    And given my view that the effectiveness of breach notification law rests on the need of sophisticated organizations to give data breach an essential place in customer relationship management, you can well imagine what I think (and what I believe all organizations should think) about the latest  proposal from the LIBE Committee of the European Parliament  that National Data Protection Authorities be empowered to conduct direct breach notification to your patients and customers.

5.  “Breach notification and consumer protection should happen in the same way in all breaches.”

Different types of information breached, types of victims and other dimensions of breaches should in fact drive very different responses.  In many types of breaches, such as social security number breaches in the US, what the victims need is to get a service such as credit monitoring  in place as quickly as possible, notifying victims through the formal notice letter that contains enrollment codes.  Credit monitoring may do absolutely nothing for many victims of medical information breaches, for whom specialized identity theft prevention and restoration services make sense, or for children.   For victims of credit card breaches, many clients have benefited from sending an immediate email at the first sign of trouble and long before any investigation, saying in effect, “We just discovered there may be an issue with your credit card number.  We’re investigating fully and will keep you fully apprised, but want to warn you to check your bank/credit card statements, and if you see any strange charges, call your bank and get it to reissue your card.”  Since the banks have thirteen months in which to decide whether it’s worth it to them to reissue cards, and all the estimated fraud costs are passed back to the “merchant” suffering the breach, this immediate warning cuts off the possibilities of both consumer hassle and what can be large estimated fraud costs to be imposed on the merchant.   The major risk associated with email address breaches is phishing attacks, so an immediate email warning of such attacks can also be immensely effective.


Source: Ponemon Institute 2011 Cost of Data Breach Report

The more interesting aspect of customized breach response, however, comes from examining the knowledge of their customers that all organizations have or are developing and using it to craft the most helpful and satisfactory message, remediation approach and mode of communication for each group of customers or individual.   This application of customer relationship management knowledge and technology is the best way for organizations to save money in data breach, because by far the largest costs associated with data breach can be lost customers and other symptoms of diminished customer trust (see illustration, and note that the total cost per record has already dropped to $136 according to the 2013 Report).    This CRM approach to breach, on top of the strong, clear, credible and accurate narrative described in 3, above, enables even data breach to be an enhancer of trust.

Posted on Wednesday, November 13 2013 at 12:45 pm by

How Big Data is Blowing Up Information Governance

New Information Governance

Governance over shared and open data and high-velocity and agile processes is not duck soup, but it is something organizations being transformed by big data and the industrial internet have to do.  It means big changes in information security, privacy, records management, defensible disposal, intellectual property and trade secrets protection programs that few organizations, lawyers, regulators or managers have faced yet.   This post will begin to lay those changes out.

Most lawyers still advocate records management and defensible disposal programs that get rid of all non-record documents as soon as they are no longer needed, even as most of those documents are increasingly to be found in dynamic databases, some but not all of which will be increasing sources of value with the use of new analytics tools.  The logic of defensible disposal has not changed regarding the many types of documents whose retention poses greater risks than value; that logic just needs to be balanced by intelligent intuitions of how certain documents with very limited value now may become much more valuable in the future.

I consider some such dynamic databases of sleeper documents to be candidates for data lakes.  Call them “lakes” because, unlike in orderly warehouses, big things swim at you fast when you so much as cast your line.  Data lakes need two things from a privacy and security standpoint to compensate for traditional controls on collection, such as data minimization and location of trade secrets in more controlled environments.  The rationales for those traditional controls on collection are that (1) decisions on use are not trusted and (2) large databases become malware targets and breach victims.  Thus the two things data lakes most need are (1) trustworthy, transparent and accountable controls, decisions and decisionmakers regarding use and (2) really good information security.  So much easier said than done, I know, but I also know a number of organizations that are getting incredibly serious about not just the second one, but the first one, trusted controls.  Trusted controls in the big data world are by no means just about privacy, but are ultimately matters of the ethics and compliance of algorithms.  Big data classifies in ways that run afoul of employment laws, human rights laws and a multitude of other standards.  An important role for the lawyers and compliance and ethics officers going forward is assessing the fairness and appropriateness of algorithms, or of the algorithms that create the algorithms.

The records and information management programs of the present and future need to balance defensible disposal with data lakes.  In some cases, we have used a zoning-like approach in which certain areas are designated for data lakes, and the construction of the two critical controls then needs to begin.
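Purely as an illustration of that zoning idea, here is a minimal sketch of how a repository might be tagged for one zone or the other. The post describes no implementation; every name, field and retention period below is a hypothetical assumption.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch: each repository is "zoned" either for defensible
# disposal (documents deleted once retention runs) or as a data lake
# (retained for analytics, subject to use controls and strong security).

@dataclass
class Repository:
    name: str
    zone: str              # "disposal" or "lake"
    retention_days: int = 0

def disposal_due(repo: Repository, created: date, today: date) -> bool:
    """A document in a disposal-zoned repository is due for deletion
    once its retention period has run; lake-zoned data is retained."""
    if repo.zone != "disposal":
        return False
    return today - created > timedelta(days=repo.retention_days)

hr_files = Repository("hr-records", zone="disposal", retention_days=365)
sensor_lake = Repository("sensor-lake", zone="lake")

print(disposal_due(hr_files, date(2012, 1, 1), date(2014, 4, 11)))    # True
print(disposal_due(sensor_lake, date(2012, 1, 1), date(2014, 4, 11)))  # False
```

The point of the sketch is only that the zoning decision is made once, per repository, and the disposal logic then follows mechanically; the hard work the post describes (building the two critical controls for the lake-zoned data) sits outside any such code.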

Of course, a big data initiative usually involves not just those lakes but the integration of new varieties of information from outside the organization. This I call bringing in the internet ocean. Often the data being brought "in" is vast, high-velocity and very unfamiliar to the organization; it does not meet the organization's quality standards and is initially unintelligible in some respects; it often needs to be tagged and valued as new "data asset classes." Big data tools encourage incorporating as much of this "great unwashed" data as the organization wants into its information stores, but lawyers should watch carefully for the movement of massive databases into their clients' possession or control, lest a preservation obligation require a legal hold that creates a "digital landfill" far more massive than was previously imaginable.

Then there are the challenges to data rights posed by sharing and agile processes, which will be explored more fully in subsequent posts. Intellectual property law will continue to have a very tough time keeping up with data asset protection needs, so protection will default to trade secrets and very carefully crafted contractual ownership and use rights for original data, usage data and inferences (or derived data). The newly filed Zettaset v. Intel case, in which Intel is accused of appropriating Zettaset's big data security trade secrets, may tell us a lot about the trade secrets landscape going forward. In any event, organizations will need data asset protection programs focused more on trade secrets and contract terms than in the past.

Big data is blowing up information governance programs not just by rendering old programs inadequate, but by pointing the way to new, big-data-driven compliance and risk management programs. This is most obvious now in information security, where traditional intrusion detection has been supplemented with exfiltration prevention based on advanced analytics. The "100% auditability" of big data demands important choices by all those responsible for information governance about where to create visibility.
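To make the exfiltration-prevention idea concrete, here is one toy version of the kind of analytics involved: flagging a host whose outbound data volume deviates sharply from its own historical baseline. This is not any vendor's method; the statistic, threshold and data are all invented for illustration.

```python
import statistics

# Hypothetical sketch of analytics-based exfiltration detection: compare
# today's outbound volume for a host against that host's own history and
# flag statistical outliers (a simple z-score test) for audit.

def flag_exfiltration(history_mb, today_mb, z_threshold=3.0):
    """Return True if today's outbound volume is an outlier versus history."""
    mean = statistics.mean(history_mb)
    stdev = statistics.stdev(history_mb)
    if stdev == 0:
        return today_mb > mean
    return (today_mb - mean) / stdev > z_threshold

baseline = [120, 115, 130, 125, 118, 122, 128]  # MB/day for one host
print(flag_exfiltration(baseline, 124))   # False: within normal range
print(flag_exfiltration(baseline, 900))   # True: a spike worth auditing
```

Real systems use far richer features (destinations, timing, content types), but the governance point is the same: someone must decide which flows are instrumented at all, i.e., where to create visibility.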

