Friday, June 9, 2017

Consumer Privacy versus Data Economy Ecosystem

We all hear about privacy needing protection, and we read about the events that have led to infringements of privacy when data breaches occur.  Essentially, privacy is desired by all, and many believe it is an aspect of life commonly understood to be worth respect and consideration.  The courts have recognized privacy's importance and value in the U.S. [1]  It is no wonder that privacy concerns ring loud given the frequency of cyber-attacks.  To date, there are no signs that the impact cyber-attacks have on the data economy will decrease.  Amid these concerns, cyber security and data security practices are under the microscope of federal and state regulators and industry leaders.  Yet consumer privacy concerns continue unabated.  The sentiment among clients is that if they do not address their own practices, their liability exposure will grow exponentially.  This business sentiment has led to the growth of self-regulation that embraces consumer privacy concerns and may offer an effective response to those concerns in the data economy.

What complicates the matter, and raises concern, is the continuous daily theft of personal data impinging on privacy, set against the expectation of privacy among individuals.  Also complicating the matter is the sale of personal data in the data market without the consumer's knowledge or consent.  The data economy practices of collection, sale, and sharing are argued to impinge on privacy and the expectation of privacy.  Magnifying this complication is the sheer prevalence of data being sold.  The data market is a lucrative business, and it creates opportunities for hackers.  In addition, there are the efforts of state-sponsored and independent groups seeking commercially valuable data and personal information for sundry purposes that amount to theft and even extortion.

Aside from the existing illicit side of the data market, there is the pervasive practice in the data economy of companies, governments, nongovernmental organizations, and numerous other entities and groups gathering data on many aspects of human activity and behavior, termed global commons efforts, that are deemed beneficial to all.  The benefits for a business, government, or other entity are enormous.  If the data is analyzed and managed accordingly, its collection can yield numerous benefits.  These could include increased sales, personalized marketing, job creation, process efficiencies, enhanced investments and their management, improved accuracy of diagnosis, improved allocation of both personnel and inputs, more effective and productive inventories, reduced costs, improved policy effectiveness, better management of utility loads and demand, improved security, aid in background checks, improved customer relations, and much more.

On balance, such a wide-open market of marketed data exposes the average person to the vulnerabilities of identity theft.  These may include not being able to obtain insurance, get a job, or get credit approved.  While technological innovation is advancing, there remains an imbalance between technical protective measures and policy amid the growing sophistication of intrusive hacking techniques.  The plethora of data in the data economy further augments the opportunities for identity theft and the wrongful acquisition of personal data.

The U.S. Supreme Court stated in U.S. Dep’t of Justice v. Reporters Comm. for Freedom of the Press[2] that a person’s privacy is related to the person’s ability to control personal information.  The federal government has pursued efforts to discourage the tracking of web usage.[3]  The FTC has supported the notion that a personal name and likeness carry commercial value, and that their infringement or misappropriation is considered a tort.  This tortious aspect of privacy infringement is most evident as it relates to Internet users, where the FTC, under Section 5 of the FTC Act, pursues privacy violations.[4]

Where the expectation of privacy is crossed by the ill-noticed practice of data disclosure, sale, and sharing, the FTC treats the scenario as a deceptive practice subject to its authority to investigate and sanction.  Social media platforms such as Facebook have had issues with this concern, as when privacy measures were loosened to allow easier disclosure of friends lists to third parties.[5]  This easing of third-party access to friends lists is an aspect of data economy activity that arguably impinges on the expectation of privacy.  With the continuing growth of Internet business, e-commerce, and digital transactions (e.g., Bitcoin), the data generated with each click of a mouse, touch on a mobile device, or swipe leads to hundreds of billions of dollars in revenues in advertising, manufacturing, and sales.

The tension between privacy and the data economy has not been ignored, despite the vicious cycle of benefit, compromise, and cyber vulnerability among all the forces active in the data economy ecosystem.  Industries have organized to propose forms of self-regulation to fill the void left by slow government efforts and the government's inability to keep abreast of innovation.  Entities across the economy have emphasized their privacy policies and instituted common practices meant to enhance privacy and the management of collected data.  Additionally, organizations are deeming themselves bound by their own privacy policies.  The “do not track” approach promoted by government is one such measure intended to allow consumers to exercise discretion over personal information.  Industries, in the spirit of self-regulation, have responded by proposing opt-out vehicles that let users limit ads, control how the browser retains Internet usage and the cookies sites execute, and control location-based services.

The balance between consumer privacy and the data economy can best be seen through industry efforts: the policies and practices industries institute as the first line of defense, along with their training of personnel and their data governance auditing.  In practice, industries may contribute positively through self-regulation initiatives, as they may be best suited to respond to the intrusive innovations that compromise data and to develop measures that allow consumers more control over their data.  Self-regulation, rather than government-imposed mandates, addresses the balance and also avoids impinging on advertising revenue.  It is seldom acknowledged that the availability of the Internet is fostered and populated by material supported by ads; otherwise the Internet would be a costly endeavor for anyone to access and use.  Privacy concerns are far from assuaged.  Yet the active participants in the data economy ecosystem are possibly best suited to forestall any gains by cyber attackers and the harms they impose through data breaches.

www.lorenzolawfirm.com
http://lorenzolawfirm.com/consumer-privacy-versus-data-economy/
Copyright 2017

Artificial Intelligence Liability

Liability, as an issue, seldom arises in common conversation.  When discussions in the workplace occur, liability is not at the top of the list of issues.  Yet there is a plethora of law firm ads about personal injury claims, insurance commercials, and medical malpractice issues.  From watching and reading these ads you are left with the impression that injury claims and liability are all too common.  Coupled with this prevalence of personal injury and medical claims is the novel aspect of technological innovation now used in the medical profession and in the delivery of services in many industries, including data management, cloud computing, software design, and data analysis.  What if something goes wrong?  What if the conclusion leading to the delivery of a service was incorrect?  What if the data was not categorized or coded accurately, leading to a data breach?  It is reasonable to wonder whether a new vein of ads and claims will arise as artificial intelligence (AI) is increasingly incorporated into the delivery of many types of services.  Can you fathom a robot conducting surgery on your spleen or knee cartilage?  Well, to the amazement of many, pacemakers are run by code that monitors, assesses, and provides feedback on your heart.  The data derived can be used to suggest replacement, treatment, or medication.  Diagnostics are run by a system of culled data that results in a predictive assessment of the best treatment, medicine, or procedure.  The benefits increase with every step of innovation.  Yet there is always room for error, including diagnostic error, procedural error, prescription error, and data mismanaged or incorrectly transferred.  Many challenges remain in assessing liability with the use of AI.

With all this innovation and possibility, how do we regard responsibility and how do we weigh liabilities?  How do we assess risk and balance it against what can be insured?  The use of machine learning through the execution of algorithmic formulas creates difficulty.  It is hard to open a formula and dissect it to determine what led to an incident.  We know that the result is drawn from inputs.  The inputs are drawn from data that is culled, categorized, and ranked by relevance, so to speak.  Algorithms are not reviewed, though their results could benefit millions or harm one skin cancer patient, an airplane pilot, or the assets of a fintech firm's portfolio.  The barrier in algorithms is their proprietary trappings.  Algorithms and their design are considered proprietary; as such, they are not open for scrutiny or evaluation except by their own designer or design team.  But the designer could very well be a bot.  That bot is itself processing data selected by someone.  The complication in discerning liability is becoming clearer.

Could AI be the turning point where liability is reduced for doctors, data managers, cloud service operators, pharmaceutical companies, and medical researchers sued by investors?  With all the potential benefits of AI in the delivery of a multitude of services across industries, how is responsibility reconciled?  Should there be liability?  The advance of AI has brought on-the-spot diagnosis, efficiencies in production and in the allocation of resources, medical services more specific to the person, and more responsive cars.  Liability is triggered when the human element factors in.  Can we sue a robot or its designer?  After all, doctors are expected to assess their use of AI in their delivery of services and diagnoses.  If an automobile or train malfunctions, according to the TV ads, we can sue the manufacturer or even the maker of a component used in the car.  We can sue a pesticide manufacturer for failing to notify the general public of the risks of its product and for failing to provide instructions on use and on protective measures to take when using the product.  Could the same be applied to the designers of AI, algorithms, and the software operating robots?  The answer is not that simple, because it is not easy to find the source.

We are left, then, with machine learning determining the future product, result, conclusion, or process that consumers, patients, and patrons receive.  AI, software, and robots are not designed in a vacuum.  They take years, numerous participants, and many beta assessments.  To assign liability to the designer: take your pick.  To assign liability to the company owning the software or robot: again, take your pick among the many involved in the development.  The downside of this exercise is that if researchers, programmers, and code writers are placed in question and held subject to liability, innovation will be stifled.  Such innovation is growing exponentially in influence in every field imaginable.  But the benefits are strengthened by how we discern responsibility for the trust we place in an airplane's flight trajectory, in a surgical procedure and the specific location of a cancer, in the industry data leading to shifts in market investments, and in the composition of a particular ingredient in a pesticide.

Moreover, could there be an argument for applying strict liability to algorithms, software, and robotic processes?  It is commonly known that consumer products are subject to strict liability, where companies are held responsible for their products' malfunctions.  As humans are fallible, and fallible humans design algorithms, software, robots, and AI, there is the possibility of a fallible algorithm or software process.  Could it also be appropriate to borrow from the pharmaceutical field the term “unavoidably unsafe” product?[1]  That doctrine could apply where a product cannot be made entirely free of risk.

Risks are always present.  Could the “unavoidably unsafe” product doctrine be applied to AI?  As practitioners, we assess risks and we acknowledge that certain products carry risks.  AI, software design, and robotic processes can be assessed, but what will be discovered is a series of beta tests and calibrations that make assigning responsibility difficult, because reasonable measures were taken.  Industry standard practice will set the bar.  In addition, if reasonable notice was provided about the product's risks, there cannot be a supportable argument of failure to warn.  More specific to AI, when one considers the volume of data input into the product or software before it is delivered, or before a medical procedure is performed, and the amount of testing and assessment before deployment, there could not be a sellable argument of failure to test.  Furthermore, consider the advisory about the need for frequent updates to address potential glitches, vulnerabilities, or detected malfunctions.  Who should be responsible for the updates, and who bears responsibility for the harm, if an update was not executed on software monitoring a pacemaker or calibrating the diagnostics of a robot?

The challenges remain for attributing liability with the use of AI.  Data is not easy to obtain, especially reliable data specific to the need or service to be delivered.  Another challenge is timely and appropriate training and development, because not all devices work in sync with one another.  The search continues for a legal remedy for discerning liability where AI is the instrument leading to the result that gave rise to the potential action.  The trust of the patient, patron, and consumer is contingent on results, and on the possibility of redress where humans relied on AI.

[1] Restatement (Second) of Torts § 402A (1965).

www.lorenzolawfirm.com/
Copyright 2017

Monday, May 15, 2017

Data Protection Coexisting with Consumer Privacy


Data privacy is a prevailing topic of concern.  This piece is the first in a series addressing what clients wrestle with regarding their delivery and administration of technology for their own clients and their day-to-day operations.  Technology will continue to influence our daily lives.  With some of the benefits come challenges.  The benefits allow more freedom and increased efficiency for the user and consumer.  The challenges face both the consumer and the operator administering the technological innovation.  The questions gnaw.  Can the data be kept and transmitted safely?  Can the data be kept private as required?  Amid the nuanced articles on the subject, these questions should be the gravamen.  Can data be kept safe, and can it be kept private, as it is administered daily in the delivery of services?

This is also about having trust in the development and reliability of systems, networks, programs, hardware, and the personnel administering them all; and, by the way, trust in the combination of technological solution and integration to deliver the needed safety and ultimate privacy.  As such, that trust has value: value to the individual expecting privacy, and value to the entity responsible for maintaining safety and privacy as custodian of the data, especially as the data is augmented daily through users' use of services.

The trust element in the innovation and administration of technology to keep data safe and private grows in importance as society's reliance on technology increases.  To address that growing reliance there needs to be a basis, a standard, or a measure by which regulators and the courts can decide when that basis, standard, or measure was not met.  Of course, that matters when litigation hits.  But it also points to a goal for how data protection will coexist with consumer privacy, as is expected, though unsaid.

The counsel given to entities in this realm, whether purveyor of technology, administrator of cloud systems, data aggregator, network designer, or source code writer, is that privacy is the end game, buttressed by accountability.  The end game of privacy lies in data maintained, transmitted, accessed, and delivered to the correctly identified and intended recipient.  Accountability is the essential ingredient in the pursuit of new programs that innovate toward that coexistence of data protection and consumer privacy, and it is essential to meeting consumer trust and confidence.  This accountability should rest on checks and balances in the development, application, and delivery of daily data services.  It must also be an organizational task instilled in management and throughout the organization, down the service line.  As clients address this issue, they realize it is their decision-making process that needs attention as they focus on the privacy of the data they administer or the result of the program they design.  Their approach needs to surpass the short-sighted view of mere regulatory compliance.

Experience demonstrates that as clients address their decision-making regarding privacy, data protection, consumer expectations and trust, and their own accountability, they realize that their culture changes.  Policies then result from that accountability-minded way of thinking that emphasizes privacy.  Privacy programs within organizations require levels of cooperation.  The internal aspect of the change should drive the entity to instill data governance, risk assessment, and needs evaluation.  The approach should be one of data governance that syncs legal, operations, contract management, information technology, and human resources, to name a few, so that all levels see eye to eye on data protection and its relationship to consumer privacy and the consumer trust component.

The relationship of data protection to data privacy is usually seen, in application, as the technological and administrative means by which digital information is operatively used, shared, described, and analyzed.  That focal point is more than administering who has access and whether anyone else can decipher the data in transmission across networks.  The challenges to the necessary coexistence of data protection and consumer privacy are not matters for academic theorizing or esoteric concepts.  They are hard, and they force honest consideration of the applications and efforts needed to achieve the privacy and protection desired and expected.  Other issues will be discussed in the next piece.

Originally published www.lorenzolawfirm.com March 8, 2017
Lorenzo Law Firm is “Working to Protect your Business, Ideas, and Property on the Web." Copyright 2017, all rights reserved Lorenzo Law Firm, P.A.

Thursday, May 11, 2017

Internet Tech Terms in Use


Internet tech terms are commonly mentioned in a law firm's discussions where the practice focuses on issues in the market engaging the Internet, among other areas: computing, e-commerce, data protection, software development, application development, data management infrastructure, cloud processing, and cyber security, to mention a few.  As these terms arise in conversation, in contracts reviewed or drafted, in negotiations, in legal advisory opinions, and in litigation, there is always the need to flesh out their meanings in the situation or business endeavor at hand.  It is also frequently necessary to describe their use and how they interplay in the aggregate of the whole system, for lack of a better word.  The word Internet remains the prevailing term to describe, as parsimoniously as possible, the cyber realm in which all of this is transmitted.  The following terms are thrown around in articles galore, assuming the reader's common awareness.

Our experience has been that the opposite is true.  Many times a client did not understand how their technology was stacked and was not aware of the difference between the client side and the server side of their system.  The data gathered by clients becomes overwhelming, and they are not aware of the tools available to cull, read, and manage the data productively.  They inquire about the cloud and how cloud computing could serve their business, but do not appreciate the risks and the methods that can be applied.  So, for purposes of enhancing a general understanding going forward, this piece will touch briefly on frequent terms that will be helpful to businesses, entrepreneurs, startups, collaborators, and others.  The following terms are addressed in alphabetical order, not in rank of prevalence or importance.

Algorithm - Traditionally one thinks of math and formulas.  What is actually being employed is a set of rules that establishes a procedure by which data is processed in order to execute some task.  What that task is has no limits.  An algorithm can be used to cull data by designated criteria, i.e., metadata.  It can be used in computer forensics to investigate for characters, words, images, phrases, and themes.  An algorithm can be a set of rules that encrypts messages so that they are unreadable unless one has the key to decode them.  Calculations are possible for astronauts, missile delivery systems, the orientation of satellites for defense or communications, or even for watching an MLB game.
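A minimal sketch of the "set of rules" idea above: an algorithm that culls records by designated criteria.  The record fields and values here are illustrative, not drawn from any real matter.

```python
def cull(records, criteria):
    """Keep only the records that satisfy every designated criterion."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# Illustrative data set: a handful of email records.
emails = [
    {"sender": "alice", "topic": "contract"},
    {"sender": "bob",   "topic": "invoice"},
    {"sender": "alice", "topic": "invoice"},
]

# Apply the rule set: sender must be "alice" AND topic must be "invoice".
matches = cull(emails, {"sender": "alice", "topic": "invoice"})
```

The same pattern, swapped-in rules aside, underlies forensic keyword searches and criteria-based data culling described in the entry.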

Application Programming Interface - A simple way of describing an API to clients, when working on contracts and terms-of-use agreements for their clients' projects, is as a form and means of communication among the software being used.  The communication between software in and through devices may be via data patterns, certain criteria or variables, or queued calls.  Depending on the contract and the work to be done, a computer program may be developed that is itself based on building blocks devised by the programmer using an API.  APIs serve multiple purposes.  An API can manage a library of software components so they work together, manage a database, or operate a system, and it can function via the web.  The important aspect is understanding that there are instructions on how data is exchanged from software to software and how that data result is made usable.
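A hedged sketch of the idea above: an API is a documented call through which one program obtains structured data from another, without knowing how that data is stored.  The function name and the stand-in data store below are hypothetical, used only to show the shape of the contract.

```python
def get_record(record_id):
    """The API call: takes an identifier, returns structured data.

    The consuming program only needs this name, its input, and the
    shape of its output -- not the storage behind it.
    """
    _store = {1: {"id": 1, "status": "active"}}  # stand-in for a database
    return _store.get(record_id, {"error": "not found"})

# A consuming program uses the documented call and its documented output.
result = get_record(1)
missing = get_record(99)
```

Whether the exchange happens in-process, across devices, or over the web, the principle the entry describes is the same: agreed instructions for how data is requested and how the result is made usable.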

Artificial Intelligence - People normally think of a robot when AI is mentioned.  But it is not the physical appearance of an operation that should draw attention.  Attention is warranted on the hidden aspect, the process that drives AI, i.e., rules, algorithms, software, etc.  The rules utilized together create a process in which questions can be answered, devices are made to operate, or even medical diagnoses can be drawn.  AI can learn from itself as results are noted and assessed for continuous improvement.  AI is not just about robots; it regards bots as well.  Bots can be termed the soft side of AI.  One can say that AI is a composition of processes driven by knowledge that can be continuously learned beyond what was initially programmed.  The challenge for startups, established businesses, and even government institutions is how to learn to use AI, and to use it ethically.

Many uses are before us, and many people do not acknowledge AI's existence.  Consider the following examples of AI in our interactions: electronic financial transactions; job searches and queries and the matching of results; journalism using storytelling agents, such as the Washington Post's Heliograf; Pinterest enhancing the recognition of images and improving searches; and investment fund indexing, to mention a few.  The key to keep in mind is that AI can learn from itself.

Big Data - The term is used to describe data accumulated from multiple sources and used for multiple purposes.  The former entails the immense obligation to manage what is received; the latter raises the ethics and means of how to use the data.  The data can be telling of many things, such as customer choices, consumer trends, investment trends, allocation of taxes, misallocation of revenues, frequency and trends of transactions, inventory adjustments, supply chain trends among vendors, employee computer use, employee attendance and productivity, and national grading trends; the list of uses seems endless.  What also makes it “big” is the multiplicity of sources.  An individual's everyday life creates data: mobile use, web searches, buying gasoline or coffee, checking out a book at the local library, social media communications, forum memberships, newsletter interactions, and again, the list of sources continues.

The immensity of this data gave rise to Data as a Service (“DaaS”), where the administrative handling of the data is outsourced due to cost constraints.  Big data is assessed by its velocity (rate of accumulation), volume (amount), value (usefulness), and variety (interrelated and not stratified).  The volume comes from captured data as web traffic and network processes increase.  The variety is by its very nature diverse.  Its value depends on relevance to the entity's function and mission.  The velocity is a consequence of the integrated recording of activities.  Clients receiving the data run criteria for the company that uses it, whether for sales, product improvements, diagnostics, predictions, market assessments, or inventory allocations.

Blockchain - Transactions and activity exchanges need to be recorded in order to be verified, measured, and accounted for.  What blockchain does is record transactions between participants: client and business, members of a group, and so on.  It can be programmed to set transactions, which can be used for automatic payments that are then reconciled in near real time.  Royalties are compensated as intellectual property is used.  Such uses can also help in the development of business processes to enhance efficiencies.  By serving as a ledger, transactions are confirmed, with multiple benefits, from adjusting inventories across the globe to tracking the monetary flows of investments.  There is also the security aspect of the blocks themselves.  These blocks of data cannot be altered and serve as a resource of verification with a time stamp.  Blockchain technology also links the blocks in a way that confirms verification.  It is essentially software for integrated transaction recording.  Blockchain's use is beneficial for records management, the medical field, banking, investments, money transfers, policing and identity screening, processing transactions (ATMs, EFTs, ACH), and internal revenue tracking.
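A minimal sketch of the ledger idea described above, under simplifying assumptions (no network, no consensus): each block carries a time stamp, a transaction, and the hash of the block before it, so altering any earlier block breaks the links that follow.  The transactions are invented examples.

```python
import hashlib
import time

def block_hash(block):
    """Hash a block's time stamp, transaction, and link to the prior block."""
    payload = f'{block["time"]}{block["tx"]}{block["prev"]}'
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(transaction, prev_hash):
    """Record one transaction, time-stamped and linked to the previous block."""
    block = {"time": time.time(), "tx": transaction, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

# Build a tiny chain: a genesis block, then a linked payment record.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("A pays B 5 units", chain[-1]["hash"]))

def verify(chain):
    """Confirm each block still points at the unaltered block before it."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))
```

If any recorded transaction is altered after the fact, recomputing the prior block's hash no longer matches the link stored in the next block, and `verify` reports the tampering; that is the "cannot be altered" property the entry describes.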


Cloud Computing (CC) - Cloud computing is a remote access feature for records, emails, and data that are not stored by the entity on its own servers.  The computing “through the cloud,” so to speak, is by virtue of accessing the information from a distance.  Particular software enables that feature and provides the service benefit via the web.  The cloud computing function uses software as a service (SaaS), where the client entity logs in to its account using the rented software to access its records, emails, and so on.  The idea is that data, records, and documents are stored in a central or distributed place and not at the business or entity's place of operation.

Clients then shift from owning infrastructure to renting and sharing resources, which inevitably shares the administrative aspects of storage.  By allowing remote access from anywhere via a web connection, the resources are shared: servers, storage, applications (APIs), algorithms, and hardware.  The purpose is to minimize costs based on usage, on a model similar to the delivery of telecommunication systems, where the network is shared.

Interface - On many occasions when discussions concern a service issue, the interface comes up in determining where a quality issue may have occurred.  A user experiences a home page that displays, in a formatted feature, the program to operate.  The application is then interfaced for the user to access the program.  What we tell clients is that it is the bridge between the user and the program.  So the user interface is about the instructions used to access and operate the application.  When the client's issue concerns multiple applications or devices, we then discuss the interface that facilitates the communication of coded instructions between the operating system and the devices.  It is essentially the means of integration.

Internet of Things - Clients seeking to streamline their processes inquire about the liabilities of IoT, but more so, they wonder about the process.  What takes place is a convergence of systems through use of the Internet.  The technology that administers data interfaces with the operational technology, whereby data is transmitted either to operate the device or to record data generated by or through the device.  The IoT functions by recognizing the Internet Protocol address of the device, which then transmits the data through the network, acknowledging the source of the data.  This is typical of heart monitors issuing a reading, alarm system controls receiving a call for a monitoring update while the owner is away from home, a wearable fitness watch transmitting distance on a run, or cyber security detection of an intrusion attempt.  The IoT involves constant connection, transmission, and collection of data about uses, functions, and performance.  Where a client is concerned with measuring a certain process, e.g., the rate of fermentation in a new beer recipe, the sensors can transmit the data needed to measure it.
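A hedged sketch of the transmission pattern described above: a device packages a reading together with its own identity so the receiving network knows the source of the data.  The device name and reading are invented for illustration, echoing the fermentation example; real devices would transmit over a network protocol rather than pass strings in-process.

```python
import json

def sensor_message(device_id, metric, value):
    """Package one reading the way a connected sensor might transmit it."""
    return json.dumps({"device": device_id, "metric": metric, "value": value})

def receive(message):
    """The network side: parse the message and acknowledge its source."""
    data = json.loads(message)
    return data["device"], data["metric"], data["value"]

# A hypothetical fermentation sensor reports a temperature reading.
msg = sensor_message("fermenter-07", "temperature_c", 19.4)
source, metric, value = receive(msg)
```

The essential point survives the simplification: every transmission carries the identity of the originating device, which is how the network attributes the constant stream of readings the entry describes.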

Malware - Intrusions increasingly use malware, software coded to find vulnerabilities in a target system or network.  Gaining access is the initial goal of the malware, which heightens the importance of training personnel to recognize suspicious emails and links.  There are a variety of malware types: spyware to monitor and steal information, keyloggers to trace character strokes on the keyboard, viruses to halt computing operations, worms seeking information inherent in the system, and ransomware seeking to extort money by seizing the computer's operation.  The carrier of the malware could be deceptively embedded in advertising (adware).  Some malware attacks the operating system of the computer or network, forcing the victim into predictable or forced steps that open the victim up to more harm.  The contentious issue for clients is whether they had cyber liability insurance that would cover the incident.

Machine Learning – Our startup clients handling complex projects as a service, i.e., modeling, programming, email filtering, etc., utilize statistical data in a variety of ways.  The projects involve computational tasks that generate productive information for the intended recipient client.  Small businesses operating from collaboration centers may offer, as a service, analytics using machine learning that yield study results, model training, algorithms, diagnostics, predictions, and even security-enhancing features.  Machine learning is task driven: it studies a data set or a compilation of entries and catalogs them by instructed variables.  A client's service built on machine learning could provide ways to filter emails for advertising, rank consumer choices, stratify demographics by region, or even make predictions on investments, interest rates, climate change, and the effects of policies.
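The email-filtering use mentioned above can be sketched with a toy word-frequency classifier. The training messages and labels are invented for illustration; a production filter would use a real corpus and an established library.

```python
from collections import Counter

# Tiny hypothetical training set: (message, label) pairs.
TRAINING = [
    ("limited offer buy now", "spam"),
    ("win money now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch next week", "ham"),
]

def train(examples):
    # Count how often each word appears under each label.
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    # Score each label by add-one-smoothed word frequency and
    # return the label with the higher score.
    scores = {}
    for label, counter in counts.items():
        total = sum(counter.values())
        score = 1.0
        for word in text.split():
            score *= (counter[word] + 1) / (total + 1)
        scores[label] = score
    return max(scores, key=scores.get)

model = train(TRAINING)
label = classify(model, "buy now and win money")  # classified as "spam"
```

The same cataloging-by-variables idea underlies the ranking, stratification, and prediction services described above; only the features and labels change.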
Open Source – In the growing sharing economy relying on collaboration, open source has been very beneficial.  Source code that would otherwise remain unseen can be shared and utilized to create improved software.  Open source software has gone mainstream, fostering innovation and serving as a backbone for later generations of processes.  Once licensed to use open source software, the user can modify it.  The key restriction is that when a modified version is shared or made available to others, its source code must also be shared.  Open source providers are focused on transparency and the free exchange of code.  By allowing access to the source code, that transparency allows others to inspect the software and perhaps improve on it.

Technology Stack – The combination of software components addressing the operating system (OS), web server (Apache), database handling (MySQL), and the server coding environment (PHP) presents a typical stack.  In operations, contracts address the client side of the service and the server side.  The Tech Stack is a categorization of the software that will be employed to run both the back-end and the front-end.  It is the stack that comprises the architecture of the system for the business, entity, government agency, you name it.  The backbone of the entity's functioning mission, or what are called business rules, operates in the back-end of the system.  Consumers, customers, and the like access the front-end through a web browser, or, if the individual is accessing on a mobile device, through an app interface.  While a system has its Tech Stack comprising its system software, applications are stacked apart.  As cloud issues arise with clients, servers and their programs become an issue regarding database programs and the software support they will receive within the stack components.
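The front-end/back-end division described above can be sketched abstractly. The record names and business rule here are hypothetical; in a real stack the request would pass through the web server and server coding layers to a real database.

```python
# A toy in-memory "database" standing in for the database layer (MySQL).
DATABASE = {"user-42": {"name": "Ada", "balance": 100}}

def business_rule(record):
    # Back-end: the entity's business rules operate here.
    status = "active" if record["balance"] > 0 else "inactive"
    return {"name": record["name"], "status": status}

def handle_request(user_id):
    # Front-end entry point: a browser or mobile app would reach this
    # through the web server (Apache) and server coding (PHP) layers.
    record = DATABASE.get(user_id)
    if record is None:
        return {"error": "not found"}
    return business_rule(record)

response = handle_request("user-42")
```

The contractual point follows the same split: the client side governs what the browser or app presents, while the server side governs the business rules and data handling behind it.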
Originally published www.lorenzolawfirm.com March 8, 2017
Lorenzo Law Firm is “Working to Protect your Business, Ideas, and Property on the Web." Copyright 2017, all rights reserved Lorenzo Law Firm, P.A.

Friday, March 10, 2017

Cybersecurity Rule Setting the Mark

Cybersecurity rulemaking, so far, has been piecemeal throughout the United States despite numerous efforts.  In contrast to the European Union's efforts through its General Data Protection Regulation (GDPR) initiative, the U.S. has no such comprehensive scheme.  We do have bolstering amendments to the Gramm-Leach-Bliley Act, embodied in the Consumer Data Security and Notification Act of 2015, that seek to require financial institutions to give notice of data breach incidents.  While the term industries has expanded to encompass all entities with operational responsibility for handling consumer financial information, Congress responded to California's promulgation of the California Notice of Security Breach Act by proposing its own Information Protection and Security Act.  The race is on to set provisions with teeth that cut through the obstacles in cybersecurity and data management and are responsive to consumer protection needs. 
Needless to say, companies have been required to address cybersecurity and the management of data, especially personally identifiable information (PII).  There is also a growing concern with the occurrence of corporate spying, the impetus that led to the Spy Act, i.e., the Securely Protect Yourself Against Cyber Trespass Act.  Though not a success, initiatives since 2011 have pursued legislative reforms to meet the concerns with information sharing, data management, and cloud transfers, especially with the E.U. and entities conducting business in the E.U.  But the matter of setting a cyber security regulation has now been placed center-stage by the State of New York.  In a press release, New York's Department of Financial Services (DFS) announced its Rule to “protect consumer data and financial systems from terrorist organizations and other criminal enterprises.”  The rule took effect March 1, 2017.  The release noted that the provision “will require banks, insurance companies, and other financial services institutions regulated by DFS to establish and maintain a cybersecurity program designed to protect consumers and ensure the safety and soundness of New York State’s financial services industry.”
The scope of its coverage hits all the points, defining affiliates, penetration testing, persons, and publicly available information, and imposing a recurring monitoring obligation via risk assessments, authentication, and programs for advisory roles.  Moreover, it provides for its scope over authorized users and covered entities.  An "authorized user" is deemed to be an employee, contractor, or agent with authorized access to the information systems of the covered entity.  Its structure is aptly organized, with a girding focus on providing for a cybersecurity program, policy, a chief information security officer, penetration testing, vulnerability assessments, results audits and screening, application security, personnel qualifications and clearances, vendor cybersecurity policies, and response plans.  The requirements also delve into encryption, multi-factor authentication, training, monitoring, notifications, post-incident assessments and audits, pre-incident security integrity audits, and the expected implementation and enforcement.
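Among the listed requirements, multi-factor authentication commonly relies on one-time passcodes. As a sketch of the mechanics (not anything prescribed by the DFS rule itself), here is the counter-based HOTP algorithm of RFC 4226, the building block behind common authenticator codes; time-based TOTP simply derives the counter from the clock.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 test secret; the counter increments with each authentication.
code = hotp(b"12345678901234567890", 0)  # first code in the sequence
```

The server and the user's token share the secret; a login succeeds only when the user presents the code matching the expected counter, supplying a second factor beyond the password.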
As the rule takes effect, many entities will face compliance concerns with their policies and contracts.  The example being set by New York's DFS will probably catch the eye of Washington and set an example for other states, especially as the EU gets closer to enforcing its GDPR.  Concerns with cyber-attacks and cyber incidents are rising, and it seems lawmakers are seeing the need.  The general hope is that the initiative catches the attention of managers and heads of covered entities, and those on the fringes, for the sake of cyber peace of mind and consumer protection at large.  It may even wake up other states too.

Thursday, March 9, 2017

Internet of Things Security Claims

Internet of Things security claims have caught the attention of lawmakers and regulators.  The Internet has been interesting to follow and work with as a realm of process and information exchange.  As the devices used to transmit information increase in our lives and work, protecting what is transmitted from unwanted eyes is not necessarily advancing at the same pace as innovation.  Amid that concern, the Federal Trade Commission has determined that standards are needed to address foreseeable vulnerabilities.  These vulnerabilities were the concern when the FTC's study focused on devices transmitting across networks under the concept of the Internet of Things (IoT).
Since 2014, efforts to standardize measures to enhance cyber security have been taking shape, through an Executive Order and the Cybersecurity Enhancement Act of 2014.  The emphasis was to perpetuate the work of the National Institute of Standards and Technology (NIST).  The FTC acknowledges the urgency with which Web applications are being deployed to achieve tangible communication features for devices used daily.
Along with these concerns, the FTC saw fit to file a complaint against a manufacturer of devices commonly used for Internet access and transmission.  The angle taken by the FTC regarding D-Link was one based on weaknesses in cyber security.  The claims were not based on actual harm experienced by consumers, but rather on the state of the security itself.  The complaint addressed IoT devices such as routers and Internet Protocol (IP) cameras.  The FTC also discussed the software that is implemented to achieve the desired transmission for devices to work as intended.  This approach also peered into consumers' use of mobile apps in the transmission and delivery of communications.
Under its authority to address misrepresentation in business practices, the FTC seeks to determine whether an entity misled consumers into believing and trusting its representations, especially claims of a cyber security nature touting that measures were implemented to a level of prevention when they were not.  Section 5(a) of the FTC Act provides authority consistent with this role and with its pursuit in the D-Link matter.  Claiming to implement a security measure when a commonly accepted measure was not in fact implemented is what the FTC deems deceptive under its Act.  To aggravate the matter, if the measures not taken are ones that are reasonable to implement, and known in the industry to prevent unauthorized access when implemented, then the entity has failed to take reasonable precautions.  D-Link was considered to have deceptively led consumers, through its claims, to believe that security features were in place.
It is noteworthy that actual harm was not the gravamen of the filing; rather, it was the deceptive aspect of the device manufacturer's cyber security claims.  This matter is telling for business.  If a company advertises security features, such claims had better be backed up with reasonable measures that meet them.  The FTC takes claims without supporting measures seriously.  If the practices behind the claims fall short of industry-reasonable measures, the business in question will face a hurdle of credibility and reputation in the industry, not to mention the scrutiny of the FTC.
Advertising is key to business growth and brand development.  Advertising done with overstatements, exaggerations, and non-carried-out claims, on the other hand, is only asking for trouble.  Businesses should take care to address their policies, manuals, promotions, packaging, and advertisements, with honest involvement of the technical stakeholders of the business and management, before publishing any security claims to the consuming public.  If carelessly crafted and promoted, materials published by a business may be seen as deceptive and will run counter to FTC guidelines, which are intended to establish standards of practice addressing considerations for Internet of Things devices.


Sunday, February 26, 2017

Software Patent Filings Abstract Snags

Software patent filings have gone through snags during the approval process.  For many, the lack of a clearly stated, specific enhancement over preexisting software has been a liability to the filing's success.  Failing to satisfactorily describe a technical improvement over prior inventions is another snag.  Distinguishing the innovation from previous filings, and from what appears to be a conventional purpose and function, is likewise critical to success.  The struggle is with the abstraction of the descriptive process concerning the functionality of the intended software.  What gets lost is the detail necessary to demonstrate the valued distinctions that set the new filing apart from preexisting related applications and functions in the field in which the software inventor seeks protection.
The Supreme Court in the Alice[1] case articulated the standard for overcoming a Rule 12(b) motion for failure to state a claim of patent eligibility.  As articulated, “a patent may be obtained for ‘any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof.’ 35 U.S.C. § 101.”  Furthermore, the Court has held that “[l]aws of nature, natural phenomena, and abstract ideas are not patentable.”[2]  In Mayo, the Court described patent eligibility as a patentability/validity determination that is “independent of . . . any other statutory patentability provision.”[3]  It is in Mayo that the Court established its two-step process to assess patent filings that provide for only abstract ideas.
The first step is to determine whether the filing's claims are directed to a patent-ineligible concept.  If that is established, the next step is to determine whether the elements of each claim, individually or in combination, “transform the nature of the claim” into a patent-eligible application.[4]  As the courts filter through patent filings for software, the challenge is to discern what is truly novel and game-changing in the software computing field, and what is a routine process of imitation with a different function that still achieves the same result without development or improvement.
To this concern, the Court distinguishes software-related patents claiming an improvement to a process or system from filings whose claim language recites no discernible improvement over what has already been active in computing.  In its own discourse, the Court alludes to the close calls, i.e., “in other cases involving computer related claims, there may be close calls about how to characterize what the claims are directed to.”  That is, “some inventions’ basic thrust might more easily be understood as directed to an abstract idea,” but under step two of the Alice analysis, it might become clear that the specific improvements in the recited computer technology go beyond “well-understood, routine, conventional activities” and render the invention patent-eligible.[5]
The Court in Bascom stated that the patent filing's claim to “filtering content is an abstract idea because it is a longstanding, well-known method of organizing human behavior, similar to concepts previously found to be abstract.”  But what was distinctive was the ordered description of the specific functions of the individual claims, apart from what was conventional among computers, Internet Service Providers, networks, and filtering.  The analytical inquiry into a claim's patent eligibility weighs on the specific description of the inventive concept claimed.
The Bascom Court further elaborated: “the claims do not merely recite the abstract idea of filtering content along with the requirement to perform it on the Internet, or to perform it on a set of generic computer components.  Such claims would not contain an inventive concept.”  “Filtering content on the Internet was already a known concept, and the patent describes how its particular arrangement of elements is a technical improvement over prior art ways of filtering such content.”  The specific location of the filtering system, a remote ISP server, together with users' ability to adjust the filtering for their network accounts, distinguished it from the abstract concept of filtering in general.
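The arrangement described above, filtering performed at a remote server with settings adjustable per user account rather than by a one-size-fits-all local filter, can be sketched as follows. The account names and blocked categories are hypothetical, purely illustrative of the claimed structure, not the actual Bascom implementation.

```python
# Per-account filter settings held at the remote server, adjustable
# by each user for their own network account.
ACCOUNT_FILTERS = {
    "household-a": {"gambling", "adult"},
    "office-b": {"social"},
}

def server_side_filter(account: str, category: str) -> bool:
    # The filtering decision is made remotely and per account,
    # rather than by a generic filter installed on a local machine.
    blocked = ACCOUNT_FILTERS.get(account, set())
    return category not in blocked

allowed = server_side_filter("household-a", "news")      # passes the filter
blocked = server_side_filter("household-a", "gambling")  # filtered out
```

It was this particular arrangement of otherwise conventional elements, not filtering as such, that the court found supplied the inventive concept.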
Hence, a new way and an improvement were recognized as applicable to the claim being filed for a patent.  The description of a claim's steps cannot solely recite achieving a process by function; it must key in on what distinguishes the claimed feature from the conventional, previously existing application.  Otherwise, filing snags will continue for software patent filings that seek innovative ways to describe the same thing without truly distinguishing improvements.
[1] Alice Corp. v. CLS Bank International, 134 S. Ct. 2347 (2014).
[2] Association for Molecular Pathology v. Myriad Genetics, Inc., 133 S. Ct. 2107, 2116 (2013) (quoting Mayo Collaborative Services v. Prometheus Labs., Inc., 132 S. Ct. 1289, 1293 (2012)).
[3] Mayo, 132 S. Ct. at 1303–04 (citing Bilski v. Kappos, 561 U.S. 593 (2010); Diamond v. Diehr, 450 U.S. 175 (1981); Parker v. Flook, 437 U.S. 584 (1978)).
[4] Mayo, 132 S. Ct. at 1297.
[5] Bascom Global Internet Services v. AT&T Mobility, LLC, 827 F.3d 1341, 1350 (Fed. Cir. 2016).

