Friday, June 9, 2017

Internet Cybersecurity and Data Security

News outlets report daily, nationally and internationally, on computer abuse, hacking, data theft, and malware.  Yet the term 'cybersecurity' is bounced around by writers, scholars, politicians, and the news media without careful attention to what it encompasses and how it relates to the Internet.  Around the world the term is used just as loosely, and that looseness causes debate.  The same can be said of its sibling, 'data security.'  From the operational standpoint of advising clients, a clear understanding of what we are talking about is needed.  It also matters when notice letters are drafted for the affected public in the event of a cyber incident involving a data breach, cyber-attack, or cyber theft.  When an entity develops policies, it is important to define these terms clearly for the benefit of personnel training, administrative audits, cyber audits, compliance reviews, cloud contracts, data storage agreements, and even securing insurance coverage.  Unfortunately, the terms have been used interchangeably and misused.  The loose use of 'cyber' took hold after President Obama referred to the subject in seeking to appoint a 'Cyber Adviser.'  The more appropriate term would have been 'data security,' because the issue was the protection of data and of physical information.  Ever since, the terms have been used loosely by academics, in and around state legislatures, and among members of the U.S. Congress.  Operationally, however, in practice dealing with clients and their issues, the terms should not be treated loosely; they should be applied appropriately.  Failing to iron out these terms and their applications will keep countries from seeing eye-to-eye on how to cooperate on cyber events, even when those events have a global impact, as the WannaCry malware did.

This post seeks to clarify the terms to avoid further misuse and mischaracterization when they are referred to in business and entity operations, in policy implementations, and in legal discussions.  As loosely used, both terms are given the meaning of protecting information from unauthorized access, and that has made some sense.  But the failure to distinguish them allows for gaps in insurance coverage and misdiagnosed issues in audits and in personnel performance evaluations.  By the same failure, the performance of information technology itself is misdiagnosed.  In governmental policy circles, 'cybersecurity' is the prevailing nomenclature; however, the federal legal provision that addresses cybersecurity is titled the Federal Information Security Management Act (FISMA).  Among information technology professionals and in select industries, such as accounting, finance, and medicine, the term used is 'information security.'  Yet even that terminology requires clarification, because an important consideration is the actual form of the information.

As we consider the form of information and its means of handling, we also need to recognize the differences.  The Internet and the digital age are here, and the information we derive from digital networks and processes can be termed data, digital documents, or digital records, as opposed to physical information.  Once physical information is digitized, it becomes digital data.  For purposes of addressing systems, networks, and platforms, 'cybersecurity' is most appropriate.  For purposes of addressing the element of communication, that is, what is being transferred, sent, stored, or received, 'data security' is most appropriate.  As files are maintained, an entity's concern is appropriately with its network integrity or network security.  Because servers are accessed by multiple users who transmit and share data, the practical reference is cybersecurity, as it addresses the integrity of the system managing the activities and functions of the digital features of the data.  So cybersecurity is the macro, systemic interface activity of the networks, Internet, intranet, email trunks, remote-access relays, and data channels involved in transmission, storage, and maintenance.  Cybersecurity, in short, is about the system.  In its technical application, cybersecurity involves the technologies, algorithms, software, networks, and devices used to protect the amalgamation that comprises the computing system from intrusions and to conduct diagnostics of its security posture.
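
To make 'diagnostics of the system' concrete, here is a minimal Python sketch of one common system-level control: a file-integrity check that compares current file hashes against a trusted baseline to detect tampering.  This is an illustration of the idea rather than anything described in the post; the baseline file and the monitored paths are hypothetical.

```python
# A file-integrity check: one small, concrete piece of system-level
# cybersecurity. Compares current file hashes to a trusted baseline.
# The baseline file and monitored paths are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_integrity(baseline_file: str) -> list:
    """Report files that are missing or have drifted from the baseline."""
    baseline = json.loads(Path(baseline_file).read_text())
    findings = []
    for name, expected in baseline.items():
        path = Path(name)
        if not path.exists():
            findings.append("MISSING: " + name)
        elif sha256_of(path) != expected:
            findings.append("MODIFIED: " + name)
    return findings
```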

Data security is the process of protecting data from unauthorized access.  As data security is applied, the issues discussed cover unauthorized disclosure and access, breach of confidentiality, and misappropriation.  Such characterization gives rise to a focus on the management or administration of the data transmitted through the system, and in turn to the concepts of data hygiene, analytics, and data governance.  The data lives in the system; data security is about what moves through it.  Another way to describe the distinction is that data security involves the interaction of humans, artificial intelligence (AI), encryption, technology management, and software processes in securing and protecting data from breaches within the cyber system.  Essentially, data security is the intended benefit of cybersecurity, that is, of protecting the system, network, or platform.
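
As a concrete illustration of protecting the data itself, the sketch below encrypts a record before it is stored or transmitted.  It assumes the third-party Python 'cryptography' package; the record contents are hypothetical, and key handling is simplified for the example.

```python
# Record-level data security: encrypt before storing or transmitting.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, held in a key vault, not inline
cipher = Fernet(key)

record = b"patient_id=1427; diagnosis=..."   # hypothetical record
token = cipher.encrypt(record)               # ciphertext, safe to store or send

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(token) == record
```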

The Internet's function blurs the distinction for many businesses and entities.  This blurring has given rise to debates on how to address information security, protection of data, data governance, Internet governance, network protection, Internet of Things security, and Industrial Internet of Things security.  The debates will continue among governments, organizations, and private entities about responding to cybercrimes and addressing Internet governance, vis-à-vis billions of individuals turning to the Internet for freedom of expression and the pursuit of knowledge, and to conduct business, transfer and transmit records, and even execute financial transactions.  The terms appear related, and they are; but the practical approach to resolving how best to address the critical events faced daily with every cyber incident requires a clearer distinction.  Until we have one, the lag between the law and cyber events will widen, and the learning curve for employees, managers, corporate officers, government officials, and lawmakers will continue as well.

Lorenzo Law Firm, P.A., copyright 2017

Consumer Privacy versus Data Economy Ecosystem

We all hear about privacy needing protection, and we read about the events that have led to infringements of privacy when data breaches occur.  Essentially, privacy is desired by all; many believe it is an aspect of life that is commonly understood and worth respect and consideration.  The courts have recognized privacy's importance and value in the U.S.[1]  It is no wonder that privacy concerns ring loud given the frequency of cyber-attacks.  To date, there are no signs that the impact of cyber-attacks on the data economy will decrease.  Amid these concerns, cybersecurity practices and data security practices are under the microscope of federal and state regulators and industry leaders.  Yet consumer privacy concerns continue unabated.  The sentiment among clients is that if they do not address their own practices, their liability exposure will grow exponentially.  This business sentiment has led to the growth of self-regulation that embraces consumer privacy concerns and possibly offers an effective response to those concerns in the data economy.

What complicates the matter, and raises concern, is the continuous daily theft of personal data, which collides with the expectation of privacy among individuals.  Also complicating the matter is the sale of personal data in the data market without the consumer's knowledge or consent.  The data economy practices of collection, sale, and sharing are argued to impinge on privacy and the expectation of privacy.  Magnifying this complication is the sheer prevalence of data being sold.  The data market is a lucrative business, and it creates opportunities for hackers.  In addition, there are the efforts of state-sponsored and independent groups seeking commercially valuable data and personal information for sundry purposes that amount to theft, even extortion.

Aside from the illicit side of the data market, there is the pervasive practice in the data economy of companies, governments, nongovernmental organizations, and numerous other entities and groups gathering data on many aspects of human activity and behavior, efforts sometimes termed the 'global commons' and deemed beneficial to all.  The benefits for a business, government, or entity are enormous.  If the data is analyzed and managed accordingly, its collection can produce numerous benefits: increased sales, personalized marketing, job creation, process efficiencies, enhanced investments and investment management, improved accuracy of diagnosis, improved allocation of personnel and inputs, effective and productive inventories, reduced costs, improved policy effectiveness, managed utility loads and demand, improved security, aid in background checks, improved customer relations, and much more.

On balance, such a wide-open market of traded data exposes the average person to identity theft.  The consequences may include being unable to obtain insurance, a job, or credit approval.  While technological innovation is advancing, there remains an imbalance between technical protective measures and policy amid the growing sophistication of intrusive hacking.  The plethora of data in the data economy further augments the opportunities for identity theft and the wrongful acquisition of personal data.

The U.S. Supreme Court stated in U.S. Dep't of Justice v. Reporters Comm. for Freedom of the Press[2] that a person's privacy is related to the person's ability to control personal information.  Efforts to discourage the tracking of web usage have been pursued by the federal government.[3]  The FTC has supported the notion that a personal name and likeness carry commercial value, and their infringement or misappropriation is considered a tort.  This tortious aspect of privacy infringement is all the more evident as it relates to Internet users, where the FTC, under Section 5 of the FTC Act, pursues privacy violations.[4]

Where the expectation of privacy is crossed by poorly noticed practices of data disclosure, sale, and sharing, the FTC treats the scenario as a deceptive practice subject to its authority to investigate and sanction.  Social media platforms such as Facebook have had issues with this concern, where loosened privacy measures allowed easier disclosure of friends lists to third parties.[5]  This easing of third-party access to friends lists is an aspect of data economy activity that arguably impinges on the expectation of privacy.  With the continuing growth of Internet business, e-commerce, and digital transactions (such as Bitcoin), the data generated with each click of a mouse, touch on a mobile device, or swipe leads to hundreds of billions of dollars in advertising, manufacturing, and sales revenue.

The tension between privacy and the data economy has not been ignored, despite the vicious cycle of benefit, compromise, and cyber vulnerability among all the active forces in the data economy ecosystem.  Industries have organized to propose forms of self-regulation to fill the void left by slow government efforts and government's inability to keep abreast of innovation.  Entities across the economy have emphasized their privacy policies and instituted common practices meant to enhance privacy and the management of collected data.  Additionally, organizations are deeming themselves bound by their own privacy policies.  The government's 'do not track' approach is one such measure, sought to allow consumers to exercise discretion over personal information.  Industries, in the spirit of self-regulation, have responded by proposing opt-out vehicles that let consumers limit targeted ads, control how the browser retains Internet usage and the cookies that sites set, and control location-based services.  One such mechanism is sketched below.
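
By way of illustration only, here is a minimal sketch of such an opt-out, assuming the Flask web framework: a server that honors the browser's 'Do Not Track' (DNT) header before setting a tracking cookie.  The cookie name and value are hypothetical.

```python
# Honoring the browser's "Do Not Track" header before setting a
# tracking cookie. Requires Flask (pip install flask); the cookie
# name and value are hypothetical.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("Welcome")
    # "DNT: 1" signals the visitor has opted out of tracking.
    if request.headers.get("DNT") != "1":
        resp.set_cookie("visitor_id", "abc123", max_age=86400)
    return resp
```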

The balance between consumer privacy and the data economy can best be seen through industry efforts: the policies and practices entities institute as a first line of defense, along with personnel training and data governance auditing.  In practice, industries may contribute positively through self-regulation, as they may be best suited to respond to the intrusive innovations that compromise data and to develop measures that give consumers more control over their data.  Self-regulation, rather than government-imposed mandates, addresses the balance while also protecting advertising revenue.  Seldom acknowledged is that the availability of the Internet is fostered and populated by ad-supported material; otherwise, the Internet would be a costly endeavor for anyone to access and use.  The privacy concerns are far from assuaged.  Yet the active participants in the data economy ecosystem are possibly best suited to forestall any gains by cyber attackers and the harms they impose through data breaches.

www.lorenzolawfirm.com
http://lorenzolawfirm.com/consumer-privacy-versus-data-economy/
Copyright 2017

Artificial Intelligence Liability

Liability, as an issue, seldom arises in everyday conversation.  When workplace discussions occur, liability is not at the top of the list.  Yet there is a plethora of law firm ads about personal injury claims, insurance commercials, and medical malpractice issues.  Watching and reading those ads leaves the impression that injury claims and liability are all too common.  Coupled with this prevalence of personal injury and medical claims is the novel technological innovation used in the medical profession and in the delivery of services across many industries, including data management, cloud computing, software design, and data analysis.  What if something goes wrong?  What if the conclusion leading to the delivery of a service was incorrect?  What if the data was not categorized or coded accurately, leading to a data breach?  It is reasonable to wonder whether a new vein of ads and claims will arise as artificial intelligence (AI) is increasingly incorporated into the delivery of many types of services.  Can you fathom a robot conducting surgery on your spleen or knee cartilage?  To the amazement of many, pacemakers are run by code that monitors, assesses, and provides feedback on your heart.  The data derived can be used to suggest replacement, treatment, or medication.  Diagnostics are run by a system of culled data that results in a predictive assessment of the best-concluded treatment, medicine, or procedure.  The benefits increase with every step of innovation.  Yet there is always room for error: diagnostic error, procedural error, prescription error, and mismanaged or incorrectly transferred data.  Many challenges remain in assessing liability with the use of AI.

With all this innovation and possibility, how do we assign responsibility and how do we weigh liabilities?  How do we assess risk and balance it against what can be insured?  The use of machine learning through the execution of algorithmic formulas introduces some difficulty.  It is difficult to open a formula and dissect it to determine what led to an incident.  We know that the result is drawn from inputs, and that the inputs are drawn from data that is culled, categorized, and identified as relevant on a scale, so to speak.  Algorithms are not reviewed, though their results could benefit millions or harm one skin cancer patient, an airplane pilot, or the assets of a Fintech firm's portfolio.  The barrier in algorithms is their proprietary trappings.  Algorithms and their design are considered proprietary; as such, they are not open for scrutiny or evaluation except by their own designer or design team.  But the designer could very well be a bot, and that bot is itself processing on the basis of data selected by someone.  The complication in discerning liability is becoming clearer.  One partial mitigation, an audit trail of a model's inputs and outputs, is sketched below.
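
The sketch below is an illustration, not a practice described above: even if the formula itself stays proprietary, recording each decision's inputs and output preserves the evidence needed to trace an incident afterward.  The function names, file name, and scoring formula are hypothetical.

```python
# An append-only audit trail around an opaque decision function: the
# formula stays proprietary, but every input and output is recorded
# so an incident can be traced later. Names are hypothetical.
import json
import time

AUDIT_LOG = "decisions.jsonl"

def audited(model_fn):
    """Wrap a decision function so each call is logged for later review."""
    def wrapper(features):
        result = model_fn(features)
        entry = {"ts": time.time(), "inputs": features, "output": result}
        with open(AUDIT_LOG, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return result
    return wrapper

@audited
def triage_score(features):
    # Stand-in for a proprietary algorithm.
    return 0.8 * features["severity"] + 0.2 * features["age_factor"]

print(triage_score({"severity": 0.9, "age_factor": 0.5}))
```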

Could AI be the turning point where liability is reduced for doctors, data managers, cloud service operators, pharmaceutical companies, and medical researchers sued by investors?  With all the potential benefits of AI in the delivery of a multitude of services across a span of industries, how is responsibility reconciled?  Should there be liability?  The advance of AI has brought on-the-spot diagnosis, efficiencies in production and in the allocation of resources, medical services more specific to the person, and cars more responsive to the driver.  Liability is triggered when the human element factors in.  Can we sue a robot or its designer?  After all, doctors are expected to assess their use of AI in their delivery of services and their diagnoses.  If an automobile or train malfunctions, according to the TV ads, we can sue the manufacturer or even the manufacturer of a component used in the car.  We can sue a pesticide manufacturer for failing to notify the general public of the risk of its product and for failing to provide instructions on use and on protective measures to take.  Could the same be applied to the designers of AI, of algorithms, and of the software operating robots?  The answer is not that simple, because it is not that easy to find the source.

We are left, then, with machine learning determining the product, result, conclusion, or process that consumers, patients, and patrons receive.  AI, software, and robots are not designed in a vacuum; they take years, numerous participants, and many beta assessments.  To align liability with the designer, then, take your pick.  To align liability with the company owning the software or robot, again, take your pick among the many involved in the development.  The downside of this exercise is that if researchers, programmers, and code writers are placed in question and held subject to liability, innovation will be stifled.  Such innovation is growing exponentially in influence in every field you can imagine.  But the benefits are strengthened by how we discern responsibility for the trust in the airplane's flight trajectory, in the surgical procedure and the specific location of the cancer, in the industry data leading to shifts in market investments, and in the composition of a particular ingredient in a pesticide.

Moreover, could there be an argument for applying strict liability to algorithms, software, and robotic processes?  It is commonly known that consumer products are tied to strict liability, under which companies are held responsible for their products' malfunctions.  Fallible humans design algorithms, software, robots, and AI; hence, there is the possibility of a fallible algorithm or software process.  Could it also be appropriate to borrow from the pharmaceutical field the term 'unavoidably unsafe' product?[1]  That doctrine could apply where a product cannot be made entirely free of risk.

Risks are always present.  Could the 'unavoidably unsafe' product doctrine be applied to AI?  As practitioners, we assess risks, and we acknowledge that certain products have risks.  AI, software design, and robotic processes can be assessed, but what will be found is a record of beta testing and calibration that makes assigning responsibility difficult, because reasonable measures were taken.  Industry standard practice will set the benchmark.  In addition, if reasonable notice of the product's risk was provided, there cannot be a supportable argument of failure to warn.  More specific to AI, when one considers the volume of data input to the process before the product or software is delivered, or the medical procedure performed, and the amount of testing and assessment before deployment, there could hardly be a supportable argument of failure to test.  Furthermore, consider the advisory about the need for frequent updates to address potential glitches, vulnerabilities, or detected malfunctions.  Who should be responsible for the updates, and who should bear responsibility for the harm if an update was not executed on software monitoring a pacemaker or calibrating the diagnostics of a robot?  A minimal sketch of an update gate follows.
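
As an illustration only, under assumed version numbers and a hypothetical device scenario, a controller can refuse to operate until the running software meets the minimum vetted release.

```python
# An update gate: refuse to operate when the running software is older
# than the minimum vetted release. The version numbers and the
# pacemaker-monitor scenario are hypothetical.
RUNNING_VERSION = (2, 1, 4)
MINIMUM_VETTED = (2, 2, 0)   # the release that patched a known glitch

def safe_to_operate(running, minimum):
    """Python tuples compare element-wise, giving a semantic version check."""
    return running >= minimum

if not safe_to_operate(RUNNING_VERSION, MINIMUM_VETTED):
    raise RuntimeError("monitor requires a vetted update before use")
```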

The challenges of attributing liability with the use of AI remain.  Data is not easy to get, especially reliable data specific to the need or service to be delivered.  Another challenge is timely and appropriate training and development, because not all devices work in sync with one another.  The search continues for a legal remedy for discerning liability where AI produced the result that gave rise to the potential action.  The trust of the patient, patron, and consumer is contingent on results and on the possibility of redress when humans rely on AI.

[1] Restatement (Second) of Torts § 402A (1965).

www.lorenzolawfirm.com/
copyright 2017