Posted on Sunday, January 18, 2015 at 8:57 pm

A Cybersecurity Vision Better than Today’s State of the Union Draft


What if we not only trashed the failed cybersecurity proposals of the past decades (as we will anyway) when the President rehashes them on Tuesday, but proposed an agenda that would not only protect us but enrich and fulfill us? We have one day, a day devoted to the memory of Dr. King, before Tuesday's State of the Union to think about cybersecurity. What can happen in a day?

Kudos to the President for taking the unusual step of vetting his legislative proposals, with cybersecurity at or near the top of the agenda, before the State of the Union. As expected, they included no new ideas, and while I do not agree with the Electronic Frontier Foundation that they are all bad ideas, it cannot be denied that they are outdated: tweaks to familiar proposals on which the side-takers have long since taken sides. Still, the responses to the proposals were far more intelligent when the first responder was a broad array of side-takers rather than the cocky spokesperson of the Congressional majority.

So let them die, but let not their death be in vain as usual; let us try some bold creative destruction. The reason they fail, IMHO, is structural, as revealed in the way that the President had to roll them out. First he had to stop by the FTC, the keepers of the flame of “reasonable security” that burns so brightly for those who believe (1) that corporations are the ones principally responsible for cybersecurity, (2) that cybersecurity is principally about protecting personal information, and (3) that consumers are terribly harmed by credit card breaches. O ye FTC lovers, ye worshipers at the feet of mobled consent orders, see ye not what a dim bulb before which ye prostrate yourselves and call it Common Law? Ye have talked the talk since 1973 in the hope that one day we may walk the walk, even as the world hurtles into a wildly insecure internet of things in which we will be lucky to walk at all (and NOBODY loves a good walk more than I).

The next stop, of course, had to be at the Department of Homeland Security, the NSA being still dangerous ground, because those agencies and others still serve as the keepers of both the offensive and defensive cybersecurity flames, contrary to the recommendations of the President’s own NSA Review Group. The Review Group knew that when the offensive and defensive cybersecurity missions are mixed, the offensive mission generally wins. Perhaps that is why the President’s proposals contain no clear, broad vision for the future of defensive cybersecurity, a vision like the one I am about to offer to you, and to him, right now.

What we need — and can have — as free societies not breeding a mandarin class in a hothouse, secured, lifetime university in Pudong – are citizenries bearing deep cyber-knowledge like Second Amendment advocates envision citizens bearing arms (cyber-knowledge of course being the true weaponry that can keep us free in this century). Widely distributed, deep cyber-knowledge and what we build with it will keep us secure, but the most wonderful thing about cyber-knowledge is that it is the know-how to build ANYTHING in the digital world, so while keeping us secure, it will empower us to be makers, tinkerers, inventors, spreading the wealth and empowerment of Silicon Valley around the world. What we need — and can have — is a free world filled with communities of HACKERS in the original sense, not criminals but tinkerers, inventors, bricoleurs.

A cyber-aware citizenry would not fall for the completely well-intentioned but ill-informed wolf-crying and bad risk assessments of privacy advocates and regulators underlying pronouncements such as (1) that corporations are the ones principally responsible for cybersecurity, (2) that cybersecurity is principally about protecting personal information, and (3) that consumers are terribly harmed by credit card breaches. Like a mushroom in the dark, the benighted consumer is fed a steady diet of privacy manure without being taught cyber-survival skills. I thought I might have a chance of helping a citizenry become cyber-survivors when I got the chance to help the State of South Carolina mitigate the damage caused by its breach of the tax records of 6.4 million individuals and businesses, dating back to 1998, the largest governmental data breach on record. I thought, Hey, this is South Carolina, where a “country boy can survive.” But the pull of Palinite politics was too powerful; they wanted the Government to keep its dirty hands off their credit monitoring, same as their Medicare.

While I was trying, though, I learned about communities that had succeeded more than I could succeed as a lone breach coach to make the citizenry cyber-aware. One such community was San Diego, which gave birth to Securing Our eCity. Yes, I mentioned it in my last post, too, but want to bring it to your attention now as an incomplete and imperfect vision better than the weak tea of self-perpetuating bureaucracies and regulatory capture of which the President appears, so far, to be capable. It is better because it is about a community coming together for defensive security, and trying to teach inconvenient truths about cybersecurity. And here is a closely related but more broadly focused program that has been around since 2013: “100 Resilient Cities” defines resilience as “the capacity of individuals, communities, institutions, businesses, and systems within a city to survive, adapt, and grow no matter what kinds of chronic stresses and acute shocks they experience.”

These visions are incomplete for our purposes because they are not about teaching ethical hacking, and not about teaching digital innovation beyond ethical hacking. Digital innovation beyond ethical hacking is being taught in hundreds of communities, however, thanks to movements like Fab Labs. Do we need centralized cybersecurity at the national level? Of course we do. But can the President of the United States, relatively free of political constraints in his final term, do millions of times better in articulating a vision for 21st Century cybersecurity in his State of the Union Address on Tuesday than to proffer the weak tea that his advisors have given him and that we already know will fail? In my very humble opinion as just one concerned individual among hundreds of millions, yes.  Finally, which would Dr. King have preferred, the grant of $25 million to historically black colleges to fund cybersecurity education which the President plans to propose on Tuesday or a vision that children of all races and circumstances be enabled to become ethical hackers and digital innovators?


Please note that we are moving this blog to for reasons that may be entirely stupid, given that you are reading it over here.  Nonetheless, it would probably make sense for you to begin reading it over there, and I would be most grateful for any suggestions on making it better there.  Thank you.
Posted on Sunday, January 11, 2015 at 6:29 pm

Collaborative Cybersecurity for the Internet of Things


The Chairman of the Joint Chiefs of Staff emphasizes today the harm that cyberattacks can do and says that the US has no competitive advantage on the global “level playing field” of cybersecurity, while the President prepares to address cybersecurity in the State of the Union, including resources for training. Meanwhile, the Prime Minister of France declares “war,” and political leaders around the world urge vigilance and collaborative action by citizens to detect and prevent terrorism.

In the US, we may be less docile and more divided than we were when “duck and cover” drills in the early 1950s used fear of bombs raining down from the skies to bring the Cold War home, but we are being bombarded with much more powerful messages. War is fought with videos designed to make murders as terrifying as possible, and global ideological wars are fought (we are told) by a nation against an entertainment company, through a hack, and much more tragically and directly by a wing of a religion against a satirical newspaper. Whatever your beliefs, one message is clear: we need not look to the skies for hypothetical attacks against which we (individuals, companies) hold the weaker hand; real attacks are all around us, particularly on the Internet but increasingly in the physical world as well.

As the Internet comes to control the physical world around us through the Internet of Things, the risks associated with such attacks will grow exponentially. Given the fundamental insecurity of the networked Internet of Things, collaborative public-private partnerships will increasingly be adopted as the means of securing critical Internet infrastructure, rather than merely expecting every individual organization to make all the decisions and bear all the costs of its own information security; a chain filled with weak links endangers the security of the stronger links. We need national equivalents of the Securing Our eCity program (born in San Diego) that incorporate public, organizational and individual efforts.

The Federal Trade Commission is about to come out with its report on the Internet of Things. This report will have little to do with collaborative, public-private cybersecurity, because such collaboration is neither within the FTC’s jurisdiction nor the way it approaches its mission. The report will probably focus on the protection of personal information according to the principles that have dominated the privacy debate since 1973, the FIPs, which impose accountability only on the organization processing the information (in part because they date from a time when an organization’s ability to control that information was a reasonable assumption). Those are important issues; many critical infrastructure security issues, however, have little to do with personal information. That may be one reason, whether currently recognized or not, why the Federal Communications Commission may need to regulate the ISPs at the center of the public Internet of Things, together, potentially, with components of the private IoT.

No matter what, security for the Internet of Things needs to be adaptive cybersecurity, including both (1) public-private partnerships protecting critical infrastructure, and (2) awareness training based on sound choice architecture for the individual citizen. More fundamentally, security for the Internet of Things will test our ability to collaborate to protect the commons, which is why careful choice architecture needs to be the human side of securing the critical infrastructure.

(Please note the new blog address, and pardon our dust as I fix it up.)

Posted on Wednesday, December 31, 2014 at 3:43 pm

WE’RE MOVING TO DATALAW.NET; 2015: Racing with the machines to protect data assets


NOTE TO READERS: This blog is moving to datalaw.net just to make it a bit easier to find, but something about the move has led me to be silent for months now, thinking about how to launch something better, something more worthy of the extraordinary opportunities and risks that surround us. Indeed, whether we deserve it or not, changes in information technology are giving us lawyers and managers of risk, compliance, ethics, security and privacy our biggest opportunity in a long time to change what we do, and perhaps the most challenging cry for help that we will ever hear. This post offers 2015 predictions that are more likely than most to come true, because they name the themes on which this blog will focus first to shake up our world. More importantly to you, right now, I have tried to lace it with explosive ideas with, at or from which your mind can run; I look forward to hearing from you whichever of those three directions you choose.

1.  Wrestling with Tech Angels for What Law Should Do

The rule of law has been a foundation of growth in technology, but the profession of law may be one of the last to grow with technology.  Technology did invent a law, but because technology invented it, it could be neither natural law, judge-made law nor statute.  Building on Alan Kay’s maxim that “The best way to predict the future is to invent it,” the law that technology invented, Moore’s Law, became a law because it turned a surprising observation into a business plan, making it very easy to repeal or replace.

If you want to invent the future, you might do well to ask what people or organizations will need. The legal profession’s little sparks toward inventing the future can be so isolated that their embers glow for decades. In 2006, Richard Susskind devoted just a few pages at the end of The End of Lawyers? to what lawyers could actually do in the future, and we keep seeing those embers in posts on alternative legal careers right up to this moment. Those little embers will not start a bonfire; they are sustaining innovations rather than disruptive innovations. They help the ancient guilds of barristers and solicitors that still dominate the profession of law become leaner and more techie.

Meanwhile the world on which those guilds perform their ancient rites has become so different, so new, that the question is no longer whether law can keep up with it so much as whether law can even see it at all. More than half of the assets of even old-economy companies are now knowledge assets (and companies are even losing their insurance coverage for these assets). Only very rarely can organizations protect the dynamic databases central to these knowledge assets with patents, particularly after Alice, and only petrified compilations tend to be eligible for copyright protection, even in Europe with its Database Directive. Law will deal with the immediate threats posed by technology that are impossible not to see, like the threat of drones flying directly into the paths of commercial airliners, which the FAA will probably address soon. Bigger and more inchoate threats call for new liability rules of the road for this industrial revolution, but to get there we need to log many more nights wrestling with the angels of technology.

2.  Protection as the End, and Governance a Means

In the absence of protection by law or by insurance, organizations must protect knowledge assets themselves; the more organizations understand their knowledge assets, the more they view protection of those assets as critical.  There are two critical means to the critical end of knowledge asset protection: contractual and operational protections.   First of all, data sharing is fundamental to realizing the value of knowledge assets, so data licensing agreements and other contracts governing the creation, ownership, maintenance, use, disclosure, aggregation, return and destruction of data are the foundation of the knowledge economy.

The other critical area of knowledge asset protection is operational or programmatic. The dominant term for it now may be “information governance” or “big data governance,” and many of its advocates and practitioners mistakenly see it as an end rather than a means to the critical end of protecting knowledge assets. It is not just that governance is a nice-to-have and protection a need-to-have. It is more than that; in an era of ubiquitous, instantaneous, powerful search, governance, having lost its value for retrieval and use, might have no value at all except insofar as it provides needed protection.

If information governance is a means, not an end, we can abandon many of the old methods of information governance either when they lose their value or when more effective means of achieving the goal of protection are devised. If we try to destroy or radically change means that define professions, however, we may have to do battle with professional associations that rise up to defend their turf, be they older professions built on classification systems no longer necessary in this era of search — e.g., archivists, records managers, librarians — or, sometimes, professions as new as privacy (see section 4, below).

3.  Getting Cybersecurity to Stop Chasing its Tail

One profession from which we certainly will not get any push-back against changing methods is cybersecurity, which leads the rest of information governance in incorporating high-velocity search, a good thing not only for the security of personal information but for the secrecy of trade secrets and the protection of databases and IP. It is always adapting, not only as a field but in relation to each particular new threat or vulnerability, as opposed to traditional organizational policies or programs, whose highest value was often consistency of application. That resilience and superhuman response speed make cybersecurity the big data hero of information governance. But they are not enough.

The best thinkers in cybersecurity do even more than constantly adapt to the barrage of attacks hitting their organizations every day; they explore the root causes of those attacks being so numerous and so easy. For example, Bruce Schneier nailed the wild insecurity and unpatchability of the internet of things (IoT), which will probably be the root cause of some of the major security incidents of 2015, in early 2014. Schneier noted that none of the players in the IoT supply chain — not the chip maker, nor the original device manufacturer, nor the brand-name company that adds the user interface — has the incentives, expertise, “or even ability” to patch the software once it’s shipped.

Understanding this issue, one can (in theory) go up the supply chain (a friendly amendment to Moore’s Law) to the chip maker, and perhaps, as Schneier suggests, the ISP is the entity properly incented to do it. Certainly new IoT regulatory requirements, like the FDA security rule for medical devices and HIPAA’s coverage of medical devices that now capture stores of patient data, create incentives at the device level, too. And we can hope that the murder of a Vice President on Homeland through the hacking of his heart monitor framed the issue well enough, given that the real VP Cheney turned off his new heart’s wifi connection.

In fact, however, even as the payment card industry begins to make the big retailer card breaches a thing of the past through its new standard, the hacking of homes, hearts and other important objects newly connected to the Internet is likely to become the next big area of breaches. Responders will want, and in some cases need, the high-velocity visibility into the widely dispersed network of the IoT, and the data loss prevention capabilities, that they now have within organizations. Even if they could get it, though, the cost would dwarf the cost of a secure supply chain, and the results would not be as good (in security, let alone privacy).

This example shows how the two means of knowledge asset protection discussed in section 2 — contractual and operational — converge in the networked world, and why lawyers and procurement and compliance strategists are sometimes as important to cybersecurity as is the chief information security officer.

4.  Privacy Advocates and Regulators Need to Start Playing with a Full Deck

If cybersecurity is sometimes trapped in an endless cycle of transforming its responses to constantly changing threats almost quickly enough, privacy has in some respects been having the same conversation for more than 40 years. This point was made very powerfully this year by Professor Chris Hoofnagle, a great historian and advocate who summarized and made available the discussions of the committee whose report created the Fair Information Practices (FIPs, or later sometimes FIPPs) in 1973. Anyone who doubts the enduring value of the FIPs, even now on the second half of Ray Kurzweil’s chessboard, should read the concise yet living “basic history” of them by privacy expert Robert Gellman. The fact that not only the FIPs but the conversations about the FIPs have not changed much in more than 40 years, however, makes one ask what has been growing outside the shadow of that oak.

Moreover, there was a moment in early 2014 that begs for someone not too scared to ask questions about the designer clothes that the emperor appears to be wearing. The moment was when the very inventor of the incredibly attractive phrase “Privacy by Design,” then-Ontario Information Privacy Commissioner Ann Cavoukian, Ph.D., published with other distinguished authors The Unintended Consequences of Privacy Paternalism. That paper was not unique in channeling the fury of privacy regulators, advocates and many scholars at what they see — with some reason, even though Fred Cate had been publishing many of the same ideas since at least 2006 — as tech companies’ efforts to undermine the basic principles of privacy; the hardest-hitting (and even more alliterative) may have been Hoofnagle’s The Potemkinism of Privacy Pragmatism. The historical irony of Cavoukian’s paper is that her adoption of the term “paternalism” may presage her own Privacy by Design remaining just another Potemkinism; indeed, without (libertarian or soft) paternalism, privacy itself may become Potemkinism.

Let me start with Privacy by Design. If you believe that you are getting that job done by building free and simple choices about privacy into consumers’ experience of products and services, as well as employees’ and citizens’ experience of the choices offered to them, I would tell you that you are playing with a FIPs deck, but not a full deck. A FIPs deck is designed (to use the terms of the behavioral economics of Kahneman and Tversky and of Thaler and Sunstein, in a deliberately irritating way) for “econs,” and the full deck for “humans.” What the full deck has that the FIPs deck does not is “choice architecture.” Choice architecture might be described as the paternalistic art of changing behavior by anticipating Kahneman’s faster, more emotional and instinctive “System 1,” or it might simply be described as what advertising is, who Steve Jobs was, or how about the air we breathe?

Because such paternalism certainly would not “qualify” (Gellman: “While transparency is a classic FIPs principle, neither Privacy by Design nor Simplified Choice qualifies.”), and might be viewed by many privacy advocates and regulators as a violation of Openness or Transparency, the people working against stronger security and privacy are generally playing with a much fuller deck than the people advocating stronger security and privacy when it comes to influencing security and privacy behavior. To the extent that Privacy by Design is about improving human behavior in ways that the slower, more deliberative and logical System 2 would entirely favor, it must adopt some degree of paternalism with System 1. I will be blogging a lot more about what I mean by that, and of course welcome your responses; for today, I just want to bring you one little acorn from outside the shadow of the FIPs oak; perhaps you will want to toss it into your New Year’s bonfire.

Happy New Year!

Posted on Friday, December 12, 2014 at 3:54 pm

Did Your Company’s Email Policy Just Become Invalid?



In 2007, the National Labor Relations Board (the “Board”) in Guard Publishing Co. d/b/a Register Guard held that employees do not have a right under the National Labor Relations Act (“NLRA”) to use an employer’s email system for union-related communications. Since then, employers have had broad latitude to implement and enforce workplace policies restricting employee email use for nonwork purposes. The only significant limitation has been that restrictions on the nonwork use of an employer’s email system must be applied in a nondiscriminatory manner. In other words, an employer could not prohibit only union-related emails and solicitations. The Register Guard ruling was generally based on the Board’s comparison of an employer’s email system to an employer’s bulletin board, copy machines, and public address systems and prior decisions holding that an employer could prohibit nonwork use of such equipment. On December 11, 2014 in Purple Communications, Inc., the Board reversed the Register Guard ruling and held that employees have a presumptive right to use their employer’s email systems for nonwork NLRA-protected purposes.

The Board’s Decision in Purple Communications

In Purple Communications, a union challenged an employer’s rule prohibiting employees from using the employer’s email system for “[e]ngaging in activities on behalf of organizations or persons with no professional or business affiliation with the Company” and “[s]ending uninvited email of a personal nature.” This gave the Board the opportunity to reconsider its earlier ruling in Register Guard. The Purple Communications Board labeled the Register Guard ruling, which was decided by a Republican-majority Board, “clearly incorrect.”

The Board criticized the Register Guard ruling on several grounds. First, the Board argued that the Register Guard ruling gave too much deference to employer property rights at the expense of employees’ NLRA-protected right to engage in concerted activity with respect to terms and conditions of employment. The Board argued that employees must be able to communicate with each other in order to exercise their statutory rights, citing the long-standing rule that an employer may generally prohibit oral solicitation at work only on employees’ working time. Second, the Board believed the Register Guard Board “inexplicably failed to perceive” how important email communications are to employees engaging in protected activity today. The Board cited the increased use of email in the years since Register Guard was decided in 2007. The Board also concluded that email should not be treated the same as other physical company property, such as bulletin boards, copy machines, and public address systems, because the use of email by one employee does not interfere with or hinder another employee’s use of email. Indeed, the Board questioned the validity of all prior cases holding that employers can broadly prohibit nonwork use of company equipment, thus signaling that the Board may be primed to change the rules regarding employee use of those facilities in future cases.

In Purple Communications, the Board held that employers that provide email access to their employees must allow those employees to use email for NLRA-protected activities, including union-related activities, on nonworking time, unless they can demonstrate special circumstances that make a complete ban of nonwork use of email necessary to maintain production or discipline. The Board indicated that “we anticipate it will be the rare case where special circumstances justify a total ban on nonwork email use by employees” and that such special circumstances must be more than theoretical. The Board stated employers can, however, establish uniform and consistently enforced restrictions, such as prohibiting large attachments and audio/video segments, if the employer can demonstrate that such use would interfere with the email system’s efficient functioning. The Board also stated that the availability of other methods of electronic communication – e.g., Facebook, blogging, or personal email accounts – and the availability of face-to-face communication is not relevant and cannot be relied upon to justify a complete ban.

The Board clarified that its ruling in Purple Communications addresses only email systems and the use of email systems by employees, rather than by nonemployees. Additionally, an employer is not required to grant email access to employees if it has chosen not to do so. The Board rightly anticipated that its ruling would raise questions about an employer’s right to ensure that employees are using email for nonwork purposes only during nonworking time. The Board clarified that its ruling does not impact an employer’s right to monitor its computer and email systems for legitimate management reasons, such as ensuring productivity and preventing harassment. An employer may not, however, increase monitoring during union-organizing campaigns or focus its monitoring efforts on protected conduct or union activists. The Board also confirmed that an employer can continue to notify employees that it will (or reserves the right to) monitor their email communications for legitimate management reasons and that employees may have no expectation of privacy in their use of the employer’s email system.

Practical Implications

Relying on Register Guard, many employers have adopted or maintained policies prohibiting or limiting employee use of employer email systems for nonwork purposes. Most such policies are now facially invalid, and the maintenance of them is an unfair labor practice under the NLRA, even if not adversely applied to a particular employee. Employers that provide email access to employees should review their email policies and practices to ensure that they comply with the standards set forth in Purple Communications. Any nonwork limitation that infringes on employees’ NLRA rights must be justified by a “special circumstance,” which will be difficult to establish.

In Purple Communications, the Board questioned the validity of all prior cases holding that employers could broadly prohibit the nonwork use of their physical property. In particular, the Board discussed and rebutted at length the justifications for protecting employer telephone systems, suggesting that they could be the next to fall. Thus, employers should also revisit their policies prohibiting nonwork use of bulletin boards, copy machines, telephone systems, and other physical property and consider whether manageable restrictions are possible, rather than complete bans.

Posted on Monday, November 24, 2014 at 11:58 am

CFPB Proposes Rules For Prepaid Products — Including Mobile


By Aaron M. Kaslow, Erich M. Hellmold and Kevin M. Toomey

On November 13, the Consumer Financial Protection Bureau (“CFPB”) released a proposed rule (the “Proposal”) that would provide significant consumer protections for prepaid accounts through substantial amendments to Regulation E (Electronic Fund Transfer Act) and Regulation Z (Truth in Lending Act). Prepaid products, which are increasingly viewed as a bridge between the unbanked and traditional deposit products, are consumer accounts typically loaded with funds by a consumer or by a third party, such as an employer. According to the CFPB, prepaid products are among the fastest growing types of consumer financial products in the United States. The Proposal would afford consumers using these emerging products the protections currently provided to existing financial products.

Specifically, the CFPB proposes to define prepaid accounts under Regulation E and Regulation Z to cover a variety of prepaid financial products, including: traditional prepaid cards; mobile and other electronic prepaid accounts that can store funds; payroll cards; government benefit cards; child support; pension payments; student financial aid disbursement cards; tax refund cards; and peer-to-peer payment products.

The Proposal would extend certain existing consumer protections – such as those available to credit card holders, payroll card accounts and certain benefit accounts – to these covered prepaid accounts in an effort to provide (1) easy and free access to account information; (2) error investigation and resolution rights; (3) fraud and lost-card protection; (4) additional disclosures designed to inform consumers and allow them to better compare prepaid products; and (5) an extension of those protections afforded credit card consumers.

Notably, the Proposal would:

  • Prohibit an institution from opening a credit card account or increasing a line of credit related to a prepaid account unless that institution considers the consumer’s ability to make the required payments;
  • Require that financial institutions provide consumers with at least 21 days to repay debt tied to a prepaid account before charging a late fee that is “reasonable and proportional” to the violation of the account terms;
  • Generally require financial institutions to limit fees during the first year following the opening of a prepaid account to 25 percent of the credit limit and prohibit them from increasing the interest rate on existing balances unless two consecutive payments are missed; and
  • Prohibit an institution from opening a credit card account or providing a solicitation or application to open a credit account to the prepaid account customer within 30 days of a consumer registering the prepaid account.
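
As a rough illustration of how the proposed 25 percent first-year fee cap would operate, consider the sketch below. The function name and dollar amounts are hypothetical; the Proposal's actual computation rules (which fees count, how the limit is measured) are more detailed than this.

```python
# Hypothetical check of the proposed first-year fee cap: total fees
# charged in the first year may not exceed 25% of the credit limit.
def first_year_fees_within_cap(total_fees: float, credit_limit: float) -> bool:
    """Return True if first-year fees stay within the proposed 25% cap."""
    return total_fees <= 0.25 * credit_limit

# With a $300 credit limit, the cap is $75 in first-year fees.
print(first_year_fees_within_cap(60.0, 300.0))   # within the cap
print(first_year_fees_within_cap(100.0, 300.0))  # exceeds the cap
```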

Comments on the Proposal must be submitted to the CFPB within 90 days from publication in the Federal Register.


Posted on Monday, October 6 2014 at 8:25 pm by

Did You Notice that California’s Data Breach Law Weather Vane Stopped Moving?

[Image: weathervane, American Folk Art Museum, NYC]

Authors: Jon Neiditz and Rishad Patel

California continues to lead the nation in privacy laws about which the nation has not begun to get excited enough.  Jon, for one, feels much safer now that paparazzi drones are prohibited.  And minors can say what they want on social media without scarring their academic records, as long as their parents are willing to leave the district (or move to California) and lie low for a year.

Against this backdrop of new ideas, the bill that had much of the business world scared in “the year of the data breach”—this year’s modification to California’s 2003 world-leading data breach notification statute—turned out not to change the status quo much at all.  A.B. 1710 had a wild youth, shooting arrows at merchants, then banks, then merchants again, but by the time Governor Jerry Brown signed it on September 30, it had grown older and wiser.  Its Bildungsroman tells us a lot about where data breach laws may come to rest.

“Maintainers” of Information Just Have One More “Reasonable Security” Obligation

In its wild youth, A.B. 1710 started out wanting to impose on “maintainers” of information on behalf of “owners or licensees” of that information (the old formulation from the 2003 law that spread across the country) the same consumer notification obligations that owners or licensees have.  Yes, in a brave gesture of economic self-destructiveness, the center of the big data, cloud computing universe was going to require all organizations holding information to notify their customers’ customers directly, rather than continuing to require them to tell their customers so that their customers can give consumer notice.

Having put away childish things, A.B. 1710 now just requires the maintainers to implement reasonable security procedures and practices appropriate to the nature of the information.  Not only were they already required by contract to do the same thing, but in most cases the FTC holds them to a very similar standard, so the net-net is no big deal.  Of course, connoisseurs of “reasonable security” know that it exists only in the eyes of the regulator, so the maintainers now have a strong interest in consistent understandings between the California Attorney General and the FTC, which is not always easy.

“If Any,” Identity Theft Prevention and Remediation Services Must Be Free for a Year

In its wild youth, A.B. 1710 would have required credit monitoring in response to all breaches. We have been trying to depict this as a crazy idea for years, even though the big retailers keep the craziness going, to the benefit of lawyers, crisis managers and credit monitoring companies and the detriment of stock prices and career longevity.  Having put away childish things, A.B. 1710 rightly contemplates “identity theft prevention and mitigation services” for breaches of social security numbers, driver’s license numbers and California ID numbers (i.e., numbers that are hard to change), and does not contemplate such services for breaches of credit or debit card numbers, financial account numbers or medical or health insurance information (where protection can be offered in other ways).  So now you know what to think of authors of articles telling you that A.B. 1710 was passed in response to highly visible retail breaches like Target and Home Depot (which involved types of information not touched by the new law).

Moreover, the late inclusion of the magic words “if any” in connection with such services clearly indicates that the law does not mandate such services, but requires that if they are offered, they must be offered at no cost to the affected individuals for at least 12 months.  This is, of course, the standard minimum offering of such services (after which the “identity theft prevention and mitigation services” providers will begin to upsell unless you require them by contract not to do so).  So contrary to what you read as recently as a few days ago, California did NOT become the first jurisdiction in the world to require such services.

Prohibition on Sales of Social Security Numbers with Business Transaction Exception

The final substantive change introduced by A.B. 1710 is the prohibition on selling, advertising for sale, or offering to sell an individual’s social security number.  The law further prohibits releasing an individual’s social security number for marketing purposes. The prohibition is subject to the exceptions already contained within the bill, such as releasing information for internal verification or administrative purposes and exceptions for healthcare providers.  The final version added an additional exception that will allow businesses to sell or release an individual’s social security number when such release or sale is incidental to a larger, legitimate business transaction.

The wording of the exception – “incidental to a larger transaction and is necessary to identify the individual in order to accomplish a legitimate business purpose” – could be problematic.  While clearly aimed at not hindering legitimate business activities such as a merger, whether a social security number is “necessary” to identify an individual, as opposed to using another type of identification verification, will be open for interpretation and could lead to uncertainty in application.

The Calm Before the Storm?

Is the wind going out of the data breach sails?  Not the encryption sails, to be sure; we are indeed entering the age of encryption.  Yes, one of the sponsors of A.B. 1710 says he is going to fight for stricter notification as well as encryption standards.  As the other sponsor said, breaches are not going away.  What happened with A.B. 1710, however, involved some pointless and expensive overreach going away.  By treating breaches of social security and driver’s license numbers differently than breaches of other sensitive information, California lawmakers have shown that they are beginning to understand that all data breaches, and the underlying sensitive information, are not the same and require different types and levels of attention and remediation.  The next step would be finally facing up to the fact that data breaches of credit card primary account numbers and email addresses need not actually harm consumers or even merchants or banks (except for reissue costs) if handled better.  Be safe, though; don’t hold your breath.

Posted on Monday, September 22 2014 at 1:45 pm by

Why the Privacy Crisis is Just the Tip of the Knowledge Asset Crisis

Source: Grant Thornton LLP 2014 Corporate General Counsel Survey, conducted by American Lawyer Media

1.  Privacy is a much bigger deal now than Scott McNealy ever thought it would be.

Privacy literally went from the basement to the boardroom over the last few years, and is now reportedly the top regulatory concern for general counsel (and boards).  Even more importantly, regulatory and compliance issues do not even rise to the top of the privacy and cybersecurity worries, with customer privacy, “unknown and unidentified risks” and “undetected breaches” among the top concerns:


Source: Grant Thornton LLP 2014 Corporate General Counsel Survey, conducted by American Lawyer Media

2.  But privacy will soon be the least of your information risk management worries.

In this post, you will come to see why the apparent privacy crisis is really just the tip of the information risk iceberg. The elevation of privacy concerns parallels and draws on a bigger, longer-term trend: the ever-increasing valuation of databases, trade secrets and IP.  We call those knowledge assets.

As you may know, intangible assets generally represent about 3/4 of corporate market value, and knowledge assets now generally represent about 2/3 of the value of those intangible assets.   Another way to look at it: the first quarter of organizational value is tangible assets; the intangibles that are brand and employee competencies make up about a third of the remaining three quarters (another quarter of the total); and that leaves knowledge assets at about 1/2 of the value of all corporate assets:
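
The arithmetic behind that split can be checked in a few lines. The 3/4 and 2/3 figures are the rough estimates cited above, not precise measurements, so the resulting quarters and half are approximations too:

```python
# Rough decomposition of corporate market value, using the approximate
# fractions cited above (estimates, not measured values).
intangible_share = 3 / 4          # intangibles as a share of market value
knowledge_of_intangibles = 2 / 3  # knowledge assets as a share of intangibles

knowledge_share = intangible_share * knowledge_of_intangibles  # ~1/2 of total
other_intangibles = intangible_share - knowledge_share         # ~1/4 of total
tangible_share = 1 - intangible_share                          # ~1/4 of total

print(f"tangible: {tangible_share:.0%}, brand/competencies: "
      f"{other_intangibles:.0%}, knowledge assets: {knowledge_share:.0%}")
```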

For many organizations, knowledge assets are already a bigger overall business issue than privacy and cybersecurity are a risk issue.  Knowledge assets as a percentage of market value have always varied substantially by industry:

Industry | Knowledge Assets ($Billions) | Market Value ($Billions) | Knowledge Assets as a Share of Market Value
Energy $773 $2,027 38.12%
Software & Services $749 $1,408 53.24%
Insurance & Other Finance $745 $1,914 38.93%
Capital Goods $632 $1,313 48.18%
Pharmaceuticals, Biotech, Life Sci. $532 $1,019 52.17%
Technology: Hardware, Equipment $495 $1,053 47.00%
Food, Beverage & Tobacco $443 $764 57.94%
Media $378 $504 75.07%
Materials $349 $737 47.42%
Healthcare Equipment & Services $348 $650 53.60%
Telecommunication Services $292 $406 71.92%
Retailing $267 $610 43.69%
Diversified Financials $212 $1,074 19.77%
Semiconductors & Equipment $191 $440 43.41%
Household & Personal Products $182 $300 60.82%
Consumer Services $170 $339 50.34%
Food & Staples Retailing $161 $383 41.97%
Transportation $142 $293 48.53%
Real Estate $139 $462 30.10%
Banks $133 $554 23.98%
Automobiles & Components $133 $213 62.26%
Consumer Durables & Apparels $104 $225 46.33%
Commercial & Professional Services $91 $162 56.15%
Utilities $4 $510 0.77%
TOTAL $7,665 $17,360 44.16%

Source: Kevin A. Hassett and Robert J. Shapiro, “What Ideas Are Worth: The Value of Intellectual Capital and Intangible Assets in the American Economy,” Sonecon, September 2011.  Based on 2009 industry data from the Bureau of Economic Analysis.  These numbers are of course dynamic; with smart grid, for example, utilities are rocketing upward in percentage of knowledge assets.
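
The last column of the table is simply knowledge assets divided by market value, which is easy to recompute as a sanity check. Note that the published percentages were evidently computed from unrounded dollar figures, so shares recomputed from the rounded billions shown above can differ from the table by a few hundredths of a point:

```python
# Recompute "Knowledge Assets as a Share of Market Value" from the
# rounded ($Billions) figures in the table above. Small differences
# from the published column reflect rounding in the inputs.
def share_of_market_value(knowledge: float, market: float) -> float:
    """Knowledge assets as a percentage of market value."""
    return 100.0 * knowledge / market

rows = {  # industry: (knowledge assets $B, market value $B)
    "Energy": (773, 2_027),
    "Media": (378, 504),
    "Utilities": (4, 510),
    "TOTAL": (7_665, 17_360),
}
for industry, (knowledge, market) in rows.items():
    print(f"{industry}: {share_of_market_value(knowledge, market):.2f}%")
```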

The arc of information security has been tracking the increasing focus on knowledge assets.  Gone are the days when organizations could treat data security principally as a compliance issue under a privacy regulatory structure such as HIPAA, GLBA, or EU data protection, or regard its principal focus as preventing the disclosure of personal information.  Now cybersecurity is driven principally by global cyberthreats, commercial espionage and the lack of a secure internet, and is focused on knowledge assets as well as sensitive (e.g., personal) information.  Moreover, because all systems are now vulnerable and most systems are infected, the focus must be on resilience and adaptability, detection and response in addition to the former focus on protection.  With the bad actors and agents now on the inside, cybersecurity is much more a subtle risk management challenge than a compliance challenge, and as an area of limited control it is more suitable than ever for risk transfer through insurance.

3.   [Marylin grabs the Massey prenup and tears it] “Darling, you’re exposed!”

Now here is the kicker:  Just as your organization begins to recognize the value and vulnerability of its knowledge assets and tries to protect them, your insurer — probably drawing on an exclusion the Insurance Services Office issued in 2013 — is in the process of excluding or narrowing all of your coverage of knowledge assets under your comprehensive general liability insurance policy.  To make up for that exclusion, they offer you a cyber-risk policy that only covers breaches of personal information, not theft or loss of knowledge assets.  Generously, in the chart below, personally-identifiable information (PII) is counted as 10% of corporate market value; that still leaves the vast majority of intangible assets uncovered.
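
The size of that gap can be put in rough numbers. The inputs below are the figures used above, both of them approximations: PII generously counted as 10% of market value, and roughly 44% as the all-industry knowledge-asset share from the table in the previous post:

```python
# Rough size of the uninsured knowledge-asset exposure, as a share of
# corporate market value. Both inputs are the article's approximations.
knowledge_share = 0.44  # knowledge assets as a share of market value
pii_share = 0.10        # PII generously counted as 10% of market value

# A PII-only cyber policy leaves the rest of the knowledge assets exposed.
uncovered = knowledge_share - pii_share
print(f"uncovered knowledge assets: ~{uncovered:.0%} of market value")
```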

This, friends, is the big bottom of the ’berg, the big uncovered area of knowledge asset protection that you can now only address through suing your insurer before your policy gets the new exclusion (and it is a good time for that), your own work in information governance and knowledge asset protection (our stock in trade), and manuscripted coverage that will become more standard as demand builds.  In upcoming posts, we will share many ideas and lessons learned.  But wait, is he going to end for now with one of those stock iceberg images that so dominate big data posts even now, as big data floats, becalmed, in the Trough of Disillusionment of the Hype Cycle?  No, it’s, it’s…..

[Image: illustration from Moby-Dick, p. 510]


Posted on Sunday, September 7 2014 at 6:09 pm by

The New Healthcare Networks — Now


1. The Collision

Technology worlds are colliding now in ways that may give the US healthcare system its last and best chance both to heal itself and to improve health. The collision gives health care institutions an opportunity to remain at the center of US healthcare, but now at the center of larger networks that include not just providers and health insurers but also medical device manufacturers and 24/7 connectivity to the patients who need it. Before this collision, US healthcare has been about curing us, not making us healthy, as the numbers make clear:

Source: OECD Health Statistics 2013; World Bank for non-OECD countries.

What in particular is costing so much? Care for chronic illnesses and comorbidity (which is the same thing, because it just means patients with more than one chronic condition, who cost up to 7 times as much as patients with only one chronic condition):

Source: Copyright © 2008–2014 Center for Healthcare Research & Transformation

Before the impending collision, there is a world of biomedical technology – generally bigger, heavier, more expensive equipment purchased by provider organizations and devoted to curing medical conditions – and a newer world of health and fitness apps – generally smaller, lighter and cheaper, and purchased by consumers.

The role and functions of biomedical technology have been morphing ever faster in the big data world. Now the technology generally needs to change, learn and produce valuable information as it is used, generally by gathering information that is protected health information (PHI) under HIPAA. So the manufacturer of the technology, previously exempt from HIPAA, now becomes a business associate directly subject to HIPAA’s security standards (we are seeing it happen both in the provider negotiations and in the business plans of the manufacturers).

As a business associate of its health care institutional customer, the manufacturer can not only get PHI from the customer; it can create and send PHI back to the customer, the provider and potentially the patient. The “velocity” of big data will be most powerful as it generates real-time insights impacting care and health decisions, and for those insights to be effective, experts or expert systems need to be able to properly interpret their messages at the point of care or health decisions.

What of the other tech world of health and fitness apps, thought to be the province of fitness zealots, quantifiers of the self, Tom Wolfe’s “social x-rays,” and other people with more time on their hands than us average Joes (or lazy slugs, your choice)? The biggest problem/opportunity for those apps in solving the woes of our healthcare system is that people with chronic diseases are precisely those who need care when they are living their lives away from the bigger, heavier, more expensive technology, but the early adopters of the health and fitness apps that go anywhere your phone and other sensors go are those with health to burn. And because neither we as a society nor those early adopters as individuals have any really compelling reasons to care about incremental improvements in their health, the health and fitness apps can be and often are tossed as soon as their users tire of them. And being so healthy, what does a fitness app user care if his or her information is breached? Nobody loses insurance or a job due to a BMI of 21 rather than 20.

On the other hand, who needs real-time information 24/7 more than someone trying to manage her or his own chronic disease, or someone trying (or something designed) to improve the health or contain the health care costs of that person? And as the literature of health apps tells us, those apps will keep getting used if someone who cares and is respected by the user is at the other end. The impending collision between the disruptive consumer health tech and the established biomedical tech creates the huge opportunity for the disruptive tech to get to the people who need it and to whom we need to get it. But they need serious information security and privacy….

2. The New Networks

Four factors point the way to a structure of the new health care networks with the provider institution in the center:

  • The selection, interpretation and integration of information flowing both from the biomedical technology manufacturers and from consumer health apps all need an expert, experts and/or expert systems;
  • After all of our unsuccessful tinkering with the health care system, we still trust our doctors;
  • A strong and trustworthy maintainer of the privacy and security of health information is necessary for patients to consent (opt in) to participate in the new programs, which given current and likely future cyber-threats means sophisticated and adaptive security; and
  • Love it or hate it, HIPAA, the privacy/security legal/regulatory structure that was put in place when government had not yet come to a standstill (and that is therefore not likely to be repealed any time soon), made the provider the covered entity on whose behalf the business associates (here the medical technology manufacturer and connected health apps) create, store, use and communicate PHI.

Here, by “connected health apps,” I mean apps that can connect into the health care system because they can credibly enter business associate agreements. The increasing demand for such apps is leading to the creation of platforms and rules that help enable compliance with those agreements and business associate regulatory requirements by the apps developed on those platforms and in accordance with those rules. If such platforms succeed, then, patients/consumers will be able to trust the security and privacy of the system enough to connect (opt in) to it, and will be connected to someone they still appear to trust – their doctors – through a system strongly incented to maintain trust – a medical institution’s information systems.

Mobile health, biomedical devices, what else? To stimulate debate and thinking — and believe me, it did! — I even advocated health systems taking information from data brokers, because if data brokers become business associates, consumers will (for the first time) have many of the rights — e.g., access, amendment, accounting of disclosures — that those who seek due process in the “scored society” could want.

Being at the center of this collision and these new networks is very good news for US hospitals, in my humble opinion. When digital health leader Dr. Eric Topol said that in 20 years,

Hospitals, except for certain key functions like intensive-care units and operating rooms, will be completely transformed to data-surveillance centers,

I wondered whether he might be too optimistic about hospitals. Why put a data surveillance center in a hospital? Who or what will do the surveillance? The current collision and its aftermath, however, keep the medical system in the middle between the patient, physician and medical technology. That is why I humbly suggest that this collision may give the US healthcare system its last and best chance both to heal itself and to improve health.


Posted on Thursday, August 28 2014 at 5:56 pm by

President Obama Nominates Kilpatrick Townsend’s Danny Marti as U.S. Intellectual Property Enforcement Coordinator


WASHINGTON, D.C. (AUGUST 28) — Kilpatrick Townsend & Stockton announced today that President Obama has nominated firm partner Danny Marti as U.S. Intellectual Property Enforcement Coordinator. Mr. Marti, who is currently Managing Partner of Kilpatrick Townsend’s Washington, D.C. office, would become just the second U.S. Intellectual Property Enforcement Coordinator in the nation’s history, replacing Victoria Espinel, who stepped down from the position in August 2013.

The Office of the U.S. Intellectual Property Enforcement Coordinator is dedicated to the protection of the American intellectual property that powers the nation’s economy. The office works to foster and protect the United States’ global competitive advantage, discouraging intellectual property theft while protecting the constitutional rights of our citizens. The office strives to ensure that the Federal government takes the most appropriate action to realize those goals.

“Danny is an exceptional intellectual property attorney who has been repeatedly recognized by his peers and clients across the country as one of the leaders in his field,” said Henry Walker, Kilpatrick Townsend Chair. “The White House has made a great choice in Danny. The entire firm congratulates him on this outstanding honor.”

“This position plays a critical role in implementing the President’s strategy for protecting some of the most important and powerful assets that help drive the nation’s economy,” said Susan Spaeth, Kilpatrick Townsend Managing Partner. “Having worked with Danny for many years, I have seen firsthand his tremendous leadership skills and he will bring the same intellect, commitment and passion to this position.”

Mr. Marti is the Managing Partner of Kilpatrick Townsend’s Washington, D.C. office – one of the largest of the firm’s 17 offices worldwide. He concentrates his practice on the protection, management, and enforcement of intellectual property assets in the United States and abroad. Mr. Marti advises clients in connection with domestic and international trademark portfolio management, licensing, and other intellectual property-based transactions.

Mr. Marti has represented clients in a wide range of cases involving trademarks, false advertising, unfair competition, copyrights, trade secrets, cybersquatting and computer fraud and abuse matters before various U.S. federal courts, as well as the Trademark Trial and Appeal Board (TTAB) and the World Intellectual Property Organization (WIPO).

Posted on Wednesday, August 27 2014 at 4:10 pm by

NLRB Rules That Employee’s Use of the Facebook “Like” Feature Can Be Protected Activity


On August 22, 2014, the National Labor Relations Board (NLRB) ruled for the first time that an employee who participated in a Facebook discussion of his employer’s income tax withholding practices merely by using the “Like” feature in connection with the discussion was protected by federal law.

In Triple Play Sports Bar & Grille, the owners of a nonunionized restaurant discharged two employees – a waitress and a cook – for their participation in an off-duty, off-site Facebook discussion involving claims that several current and former employees unexpectedly owed additional state income taxes because of alleged accounting errors by their employer. A former employee of the restaurant posted a comment on her Facebook wall stating that the restaurant owners “can’t even do the tax paperwork correctly.” The cook clicked “Like” for this comment. The waitress responded to the comment with a comment of her own, stating that she too owed income taxes and referring to one of the restaurant’s owners with an off-color expletive. Neither the waitress nor the cook participated further in the discussion, although other employees and customers continued with comments, including one characterizing one of the restaurant’s owners as a “shady little man” who probably “pocketed” employees’ money.

The NLRB found the discharges unlawful. In this case, there was no dispute that the initial comment exchange among employees and a former employee about the restaurant owners not being able to “even do the tax paperwork correctly” was concerted activity relating to pay practices under the National Labor Relations Act (“NLRA”). The NLRA gives employees a general right to engage in concerted activity (that is, to act together with other employees) with respect to wages, hours, and other terms and conditions of employment and makes it unlawful for employers to base adverse employment actions on such protected concerted activities.

In Triple Play, the NLRB equated merely clicking “Like” during a Facebook discussion to expressing agreement with the particular written comment to which the “Like” designation related and thus granted NLRA protection to the cook. The NLRB also found the waitress’s comment was protected, despite its use of a profane expletive to describe one of the restaurant owners. Significantly, the NLRB found that the waitress’s comment and the cook’s clicking of “Like” on one individual posting in the discussion effectively endorsed the former employee’s original complaint only, and the NLRB held that the discharged waitress and cook therefore could not be held responsible for the other comments posted in the exchange, some of which might not have been shielded by the NLRA.

The NLRB further found that the employer’s Internet/Blogging policy, which prohibited employees from “engaging in inappropriate discussions about the company,” could be construed to prohibit the type of protected Facebook posts that led to the unlawful discharges and thus also violated the NLRA.

The NLRB’s decision in Triple Play is noteworthy because it expands the concept of “concerted activity” under the NLRA to include Facebook “Likes” that are inserted in a Facebook discussion of working conditions. It also continues the NLRB’s close scrutiny of and strict approach to social media policies.