Monday, September 26, 2016

Internet Speech Immunity

Internet speech immunity exceptions are sought frequently by individuals and businesses affected by someone else’s comments about them on an online site.  Online sites are today’s marketplace of ideas, enhancing the “competition of the market.”[1]  And so, the claim usually asserted is a defamation claim directed at the website where the comment is displayed.  The assertion is underscored by a belief that the website is responsible for publishing the statement, whether as slander when conveyed verbally in a video or as libel when displayed in written form on a website.  The volume of social interaction in which freely exchanged views may be directed at a particular entity or person presents the issue of striking a balance between protected and unprotected speech.  Whether a harmful effect is pertinent may turn on the plaintiff’s level of publicity, the truth or falsity of the alleged libel or slander, the public import of the statement, and the political value the statement contributes to the discourse.
Amid the plethora of defenses, which include truth, privilege, lack of malice, and illegality, there is the social import or political value defense known as Anti-SLAPP.  The acronym refers to countering a strategic lawsuit against public participation (SLAPP).  States adopted Anti-SLAPP statutes to foster free speech and discourse, whether exercised through petitioning or through free speech rights generally.  The belief is that in the marketplace of ideas, an element of truth arises from the exchange, and that the openness of the exchange will bring incorrect conceptions to light.  Those who oppose any such light fear that their views may be shown to be weak or incorrect in society; hence, they seek to silence discourse and potentially dissident views.
The merit behind the promulgation of Anti-SLAPP statutes was essentially to reduce the number of frivolous lawsuits, suits driven to prevent or censor speech or public participation.  The concern among judges and lawyers is that a SLAPP action is, by definition, a case responding to and opposing an exercise of free speech.  The Anti-SLAPP vehicle may be instrumental in challenging a lawsuit that seeks to silence free speech, especially if the targeted speech carries public import or political value, even when it comes from the media.  But when the targeted speech conveys falsehoods to the public about a private person, the speech loses its protection.
However, another concern arises when a site is used as a platform to organize activity aimed at harming other people, much as the postal service has been used.  Groups seeking to commit crimes against others, as in the facts described in Fields v. Twitter, use online platforms to carry out their plans.  The idea is that if the platform had prohibited such communications, the act and the organizing communicated through it would have been prevented.  That expectation of monitoring conduct touches upon “policing” issues and “privacy” issues that are beyond the scope of this post.
This concern leads into the consideration of a site being used to voice negative comments about a person or a business, where the site is claimed to be the cause of harm to that person’s social and business reputation.  The argument asserted is that the site could have prevented the comments from being posted, notwithstanding the immunity afforded by Section 230 of the Communications Decency Act (CDA).  The claim then seeks to establish that the online site is none other than a publisher and should be held responsible, especially when the comments could be fabrications used by the online site itself.  This was the tone of the claims and discussion in Kimzey v. Yelp.
What stands out in Kimzey is the angle that moves beyond the mere display of a statement or comment toward seeking to establish that the comment was a fabrication instrumentally contrived by the online site itself.  The argument goes that the online site authored the review and used it as a marketing device.  The court stressed that arguing the potential falsity of a comment or review does not defeat the online site’s immunity.  Furthermore, any assessment the online site draws in evaluating a comment rests on information or data supplied by the users themselves; that input is what enables the site to establish a measure or rating of the comments.  While the site exercises some discretion in designing the measure, it is the users who provide the information that informs the site’s grading of comments about the party claiming defamation.
The world of the web is here to stay and will be a part of our lives, especially as Internet law evolves.  As we continue to interconnect via mobile apps and the Internet, our voices carry with a broader effect.  The use of Anti-SLAPP to counter efforts at silencing speech, and the resort to Section 230 to immunize an online site from a defamation claim over comments displayed on its platform, are both instrumental in enhancing the exchange of communication in society.  As Oliver Wendell Holmes, Jr. wrote in his dissent in Abrams v. United States, “The ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market.”
[1] Quoted phrase of Justice Oliver Wendell Holmes, Jr., dissenting in Abrams v. United States (1919).
Lorenzo Law Firm is “Working to Protect your Business, Ideas, and Property on the Web.” Copyright 2016, all rights reserved, Lorenzo Law Firm, P.A.

Monday, September 19, 2016

Data Security Practices

Data security practices are increasingly becoming a theme among management and employees in the administration of business and in daily work processes.  The common element in data breaches is human fallibility, whether negligence, inadequate training, or an underestimation of the attention required.  Given the rate at which incidents are occurring, information technology personnel and operations personnel find themselves needing to collaborate more frequently than ever.  This change in organizational administration is occurring at the state and federal government levels and in the private sector, including the banking, legal, accounting, and insurance industries.  The effort, one may say, is to enhance the quality or rigor of the data security employed by the particular entity.  Data breaches are not all the same, but they share one commonality: the intruder’s goal is always confidential data (CD), personal health information (PHI), electronic personal health information (ePHI), or personally identifiable information (PII).  Regardless of the nomenclature used to define what is at stake, the rigor or quality of these efforts is beginning to be questioned by entities originally deemed to lack authority to engage in this type of scrutiny.  We have blogged on the care entities should take to ensure that their representations about their data security practices are accurate.  When an entity claims a level of data security it does not actually deliver, the Federal Trade Commission (FTC) could very well deem the claim a deceptive business practice.
Representations about the care employed in data handling and transfers should not be overstated, lest an entity meet a finding similar to Dwolla, Inc.’s experience regarding its representations.  One matter that comes to mind in this post is the case involving LabMD, Inc.  LabMD administered a medical laboratory that, among its services, provided cancer detection screening for physicians.  In 2013, the FTC filed a complaint against LabMD alleging that LabMD was subject to Section 5 of the FTC Act because it misrepresented its actual data security efforts.  LabMD’s practices were questioned.  The FTC argued that LabMD did not exert reasonable efforts to secure personal information and that its networks were not attended to as they should have been.  The FTC commenced its investigation as a result of data security incidents at LabMD.  There followed a tossing and turning exchange in this matter among the three participants: the FTC, the Eleventh Circuit, and LabMD.
From the initial complaint, which commenced in the Georgia District Court, it was noted that LabMD patient information was available on the Internet.  The data, electronic personal health information (ePHI), was actually searchable on a peer-to-peer network.  LabMD faced claims that it failed to prevent the unauthorized disclosure of ePHI.  Its motion to dismiss was unsuccessful, despite its strong argument that the FTC did not have authority.  The FTC had filed an administrative case in the District Court in Georgia, and the District Court had to determine whether the FTC indeed had a say over the handling of ePHI.
The toss thereafter came when the district court denied LabMD’s motion to dismiss and LabMD appealed to the Eleventh Circuit Court of Appeals.  The turn was not only that the Eleventh Circuit ruled against LabMD’s appeal; it was that the Court, which it was hoped would opine on the FTC’s enforcement authority, instead determined that an administrative remedy step was required.  The Court ruled that LabMD had remedies to exhaust[1] before it could engage the issues presented.  The resulting turn at that stage was a lesson in administrative law.
What proceeded thereafter was as expected.  An administrative proceeding ensued in which an ALJ determined that harm had not been demonstrated.  The ALJ reasoned that, short of a finding of harm, the Commission had not met what Section 5 of the FTC Act required.  The FTC subsequently reversed the ALJ’s determination.  In its ruling, the FTC stated that LabMD did not exert its best efforts to secure the data.  It found that LabMD did not monitor how files were handled and did not employ a system to detect intrusions, steps deemed to be basic means of protecting confidential data.  The FTC concluded that LabMD’s conduct constituted an ‘unfair act’ undermining public trust and was inconsistent with Section 5 of the FTC Act.  What is noteworthy is that nowhere does Section 5 of the FTC Act give the FTC authority to address the protection of medical records or the maintenance of their privacy.
Nevertheless, the FTC proceeded to conclude that the disclosure was tantamount to harm because of LabMD’s neglect in failing to train employees adequately on handling medical records and failing to monitor its firewall.  In the FTC’s view, LabMD’s conduct resulting in the disclosure of personal medical information was a substantial injury under Section 5 of the FTC Act.  The FTC did not look into whether the information was used in the open market; it looked only at the fact that there was an unauthorized disclosure of PHI.  LabMD is now obliged to implement a “CISP” (comprehensive information security program), proactively inform affected individuals, and conduct frequent audits.  The key point to note about this matter is that harm may turn more on the unauthorized disclosure caused by neglect than on the harm actually experienced by individuals, and that the FTC is exerting greater authority over data security than it was originally conceived to have.
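To make those “basic means” concrete, the sketch below illustrates one such safeguard: a baseline file-integrity check that flags files added, removed, or altered in a directory holding sensitive records.  It is a minimal, hypothetical illustration, not a description of LabMD’s systems or of what the FTC’s order requires; the directory path and baseline file name are placeholders, and a real comprehensive information security program would pair such checks with alerting, access controls, and audits.

import hashlib
import json
from pathlib import Path

SENSITIVE_DIR = Path("/data/records")   # hypothetical directory of sensitive files
BASELINE_FILE = Path("baseline.json")   # hashes recorded from a known-good state

def snapshot(directory: Path) -> dict:
    """Fingerprint every file under the directory with a SHA-256 hash."""
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(directory.rglob("*"))
        if path.is_file()
    }

def check_against_baseline() -> None:
    """Compare the current directory state to the stored baseline."""
    current = snapshot(SENSITIVE_DIR)
    if not BASELINE_FILE.exists():
        BASELINE_FILE.write_text(json.dumps(current, indent=2))
        print("Baseline recorded; run again to detect changes.")
        return
    baseline = json.loads(BASELINE_FILE.read_text())
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {p for p in current.keys() & baseline.keys() if current[p] != baseline[p]}
    # In practice these findings would feed an alerting or audit system.
    for label, paths in (("ADDED", added), ("REMOVED", removed), ("CHANGED", changed)):
        for p in sorted(paths):
            print(f"{label}: {p}")

if __name__ == "__main__":
    check_against_baseline()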
[1] Administrative Procedure Act, 5 U.S.C. § 704 (actions reviewable).
Lorenzo Law Firm is “Working to Protect your Business, Ideas, and Property on the Web.” Copyright 2016, all rights reserved, Lorenzo Law Firm, P.A.

Website Crawling and Data Scraping Thoughts

Website crawling and data scraping have burdened the growth of e-commerce as website owners witness their data being scraped.  The legal questions have lingered, and many stand out.  The prevalence of crawling and scraping has become too much the norm among those using web content for business, research, or marketing purposes.  The common theme is that website scraping is used by those seeking a shortcut to catch up to their competition, seeking to emulate their competition, or seeking to extract information that would otherwise take too much time to gather.  The crawling can be useful for enhancing search relevance, indexing, and accuracy.  The software used is not unique; it can be automated simply to extract information, much as search engines do, with the additional feat of converting the data into a usable form within a database.  The data being sought can be extracted from many types of sources.  A newcomer business desiring to start on some equal footing could seek its data from booking websites, Yelp, eBay, or even a directory.  Potential scrapers may go after a business they desire to emulate.  The purposes for which website scraping is pursued give “big data” gathering a new image with unsavory impressions.
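For illustration, the kind of scraper just described can be sketched in a few lines of Python.  The URL, page markup, and CSS selectors below are hypothetical placeholders, not any particular site’s structure; the point is only that widely available libraries can fetch a page, extract fields, and convert them into rows in a database.

# Hypothetical sketch: fetch a page, extract fields, store them in a database.
# Requires third-party packages: pip install requests beautifulsoup4
import sqlite3

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/listings"  # placeholder target page

def scrape_listings(url: str) -> list:
    """Download the page and pull out (title, price) pairs."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    rows = []
    for item in soup.select(".listing"):  # placeholder markup
        title = item.select_one(".title")
        price = item.select_one(".price")
        if title and price:
            rows.append((title.get_text(strip=True), price.get_text(strip=True)))
    return rows

def store(rows: list) -> None:
    """Persist the scraped rows so they can be queried like any dataset."""
    with sqlite3.connect("scraped.db") as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS listings (title TEXT, price TEXT)")
        conn.executemany("INSERT INTO listings VALUES (?, ?)", rows)

if __name__ == "__main__":
    store(scrape_listings(URL))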
The reality is that the Internet is not without web crawlers and scrapers, which are instrumental to the analysis of website performance in sync with search terms and the measurement of traffic volume.  Yet the method of using web crawlers either to aggregate news content or to enhance the relevancy of search results has drawn attention to the legal consequences and the legal issues they cause.  To the scraped website business, the potential arguments span the spectrum from a violation of the website’s terms of use to the occurrence of computer abuse.  Between them is a list of legal considerations that includes trespass to chattels, copyright infringement, trademark infringement, and unauthorized access to computer information, all in the name of online data collection, for better or for worse.  The “for worse” side embraces the conception of a software application tasked with collecting online data through scripts, also known as a “bot,” and the analysis drawn from that data.  These bots can give the impression of actual human online interaction.  Nevertheless, the legal questions raised by, and the impact of, a bot’s online website data scraping work are diverse.
Among the legal issues is the violation of the terms of use stated on websites that prohibit scraping and crawling, essentially the copying of the respective website’s content and data.  The argument can embrace the notion of contract: if one uses or visits the website, there is the understanding that visitors are bound by the website’s terms of use (ToS).  An act that violates a website’s ToS supports a breach of contract argument, without going into the details, in this short note, of “clickwrap” and “browsewrap” agreements.  Both hinge on informed consent, the means of expressing a user’s consent, and the user’s clear ‘constructive knowledge’ vis-à-vis the prominence of the ToS on a website.  The glaring prominence of the ToS and its conditions, such that a website user is made aware of them, is pivotal to establishing a breach of the terms of use; whether the user actually reads the terms is immaterial.  However, the same cannot be said when the crawling or scraping is done by a bot that is not scripted to read and consent to a website’s ToS.  The means by which this is technically done skirts the legal elements of ‘consent’, ‘constructive knowledge’, and ‘prominent and clear notice’ that are required to establish a form of breach.  The arguments hovering over prohibited uses of a website have reached the point of distinguishing commercial from personal uses, with the former being the one restricted and prohibited by the ToS.
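As a technical aside, a bot cannot meaningfully read and consent to a ToS page, but it can be scripted to honor the one machine-readable permission signal most websites publish: the robots.txt file.  The sketch below, a hypothetical example using only Python’s standard library, checks that signal before fetching a page.  Note that robots.txt is a voluntary convention, not the contractual consent discussed above, and the example URL and user-agent name are placeholders.

# Hypothetical sketch: consult a site's robots.txt before crawling a URL.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed_to_fetch(url: str, user_agent: str = "example-bot") -> bool:
    """Return True if the site's robots.txt permits this agent to fetch the URL."""
    parts = urlparse(url)
    parser = RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()  # downloads and parses the robots.txt file
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    target = "https://example.com/listings"  # placeholder URL
    if allowed_to_fetch(target):
        print("robots.txt permits crawling; proceed.")
    else:
        print("robots.txt disallows crawling; skip.")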
In addition to the ToS concern, there is the copyright infringement issue with the scraping of website data and content.  The ultimate question is which aspect provides the best argument.  The Copyright Act protects expression, whether in a visibly readable form or in a digital form on a server.  The Copyright Act may not be effective in addressing or preempting the use the website owner seeks to address.  For instance, if the crawling and scraping are not done for commercial purposes, the Copyright Act may not yield the necessary leverage.  Yet Facebook’s case against Power.com, which was underscored by the Copyright Act, was effective in that the defendant was aggregating Facebook’s data onto another site in violation of Facebook’s terms.  The Northern District of California denied the defendant’s motion to dismiss, determining that scraping involves the copying that Facebook explicitly restricts in its ToS.
Aside from the copyright infringement issues, there is the consideration that scraping or crawling a website against the owner’s ToS is tantamount to unauthorized access, or to exceeding the permitted use of a website and its content.  Such a view resorts to the Computer Fraud and Abuse Act (CFAA), which addresses both the unauthorized access of a computer system and the exceeding of a permitted scope of use.  The use of the website must have exceeded what was authorized, coupled with an express and clear statement on the website of what uses or activities regarding its content and data were prohibited.  Conjoined with this consideration is the often articulated defensive crutch of ‘fair use’.  Yet scraping website content does not inherently qualify for the benefit of the ‘fair use’ argument.
Furthermore, web crawling and scraping also raise the concern of determining the existence of damages if website content and data are considered ‘chattel’.  In eBay’s case against Bidder’s Edge, the platform’s content and data were argued to be chattel upon which Bidder’s Edge trespassed, and eBay also argued that the defendant’s acts interrupted eBay’s operations.  However, the effectiveness of the argument relies on the existence of damages.  Without damages, the argument withers, and courts do not see trespass to chattels as a workable theory against website scraping and crawling.  Another frequently used argument against web crawling and scraping is the Digital Millennium Copyright Act (“DMCA”), which can operate to restrict even fair uses of content.  What is interesting is the actual bypassing that takes place to circumvent a website’s measures restricting web crawling and scraping; the DMCA provides an enforcement means for the copyright in a website’s digital content.
The complexity created by the use of bots is evident, even as its resolution remains elusive.  Also evident is that the fair use defense, the absence of damages, and the potential absence of the elements of consent and constructive knowledge will continue as points of contention as website owners oppose web scrapers.  The legal issues have thus far crossed from intellectual property and contract concerns to unauthorized access to a network or computer system, raising the specter of continued legal disputes over website scraping.
Lorenzo Law Firm is “Working to Protect your Business, Ideas, and Property on the Web.” Copyright 2016, all rights reserved, Lorenzo Law Firm, P.A.