Friday, June 9, 2017

Artificial Intelligence Liability

Liability, as an issue, seldom arises in everyday conversation. Even when discussions occur in the workplace, liability is not at the top of the list of concerns. Yet there is a plethora of law firm ads about personal injury claims, insurance commercials, and medical malpractice issues. From watching and reading these ads, you are left with the impression that injury claims and liability are all too common. Coupled with this prevalence of personal injury and medical claims is the novel technological innovation now utilized in the medical profession and in the delivery of services across many industries, including data management, cloud computing, software design, and data analysis.

What if something goes wrong? What if the conclusion leading to the delivery of a service was incorrect? What if the data was not categorized or coded accurately, leading to a data breach? It is reasonable to wonder whether a new vein of ads and claims will arise as artificial intelligence (AI) is increasingly incorporated into the delivery of many types of services. Can you fathom a robot conducting surgery on your spleen or knee cartilage? Well, to the amazement of many, pacemakers are already run by code that monitors, assesses, and provides feedback on your heart. The data derived can be used to suggest replacement, treatment, or medication. Diagnostics are run by systems of culled data that produce predictive assessments of the best treatment, medicine, or procedure. The benefits increase with every step of innovation. Yet there is always room for error: diagnostic error, procedural error, prescription error, data mismanaged or incorrectly transferred. Many challenges remain in assessing liability with the use of AI.

With all this innovation and possibility, how do we assign responsibility, and how do we weigh liabilities? How do we assess risk and balance it against what can be insured? The use of machine learning through the execution of algorithms introduces some difficulty. It is hard to open up a formula and dissect it to determine what led to an incident. We know that the result is driven by inputs. The inputs are drawn from data that is culled, categorized, and ranked by relevance, so to speak. Algorithms are not reviewed, though their results could benefit millions or hinder one skin cancer patient, an airplane pilot, or the assets of a fintech firm's portfolio. The barrier in algorithms is their proprietary trappings. Algorithms and their design are considered proprietary; as such, they are not open for scrutiny or evaluation except by their own designer or design team. But the designer could very well be a bot, and that bot is itself processing based on input data selected by someone. The complication in discerning liability is becoming clearer.

Could AI be the turning point where liability is reduced for doctors, data managers, cloud service operators, pharmaceutical companies, and medical researchers sued by investors? With all the potential benefits of AI in the delivery of a multitude of services across a span of industries, how is responsibility reconciled? Should there be liability? The advance of AI technology has brought on-the-spot diagnosis, efficiencies in production and in the allocation of resources, medical services more tailored to the person, and cars more responsive to the driver. Liability is triggered when the human element factors in. Can we sue a robot or its designer? After all, doctors are expected to assess their use of AI in their delivery of services and their diagnoses. If an automobile or train malfunctions, according to the TV ads, we can sue the manufacturer, or even the manufacturer of a component used in the car. We can sue a pesticide manufacturer for failing to notify the general public of the risks of its product and for failing to provide instructions on its use and on protective measures to take. Could the same be applied to the designers of AI, algorithms, and the software operating robots? The answer is not that simple, because it is not that easy to find the source.

We are left, then, with machine learning determining the future product, result, conclusion, or process that consumers, patients, and patrons receive. AI, software, and robots are not designed in a vacuum. They take years, numerous participants, and many beta assessments. To assign liability to the designer, then: take your pick. To assign liability to the company owning the software or robot: again, take your pick among the many involved in the development. The downside of this exercise is that if the researchers, programmers, and code writers are placed in question and held subject to liability, innovation will be stifled. Such innovation is growing exponentially in influence in every field you can imagine. Yet the benefits depend on how we discern responsibility: for the trust in the airplane's flight trajectory, in the surgical procedure at the specific location of the cancer, in the industry data leading to shifts in market investments, and in the composition of a particular ingredient in a pesticide.

Moreover, could there be an argument for applying strict liability to algorithms, software, and robotic processes? Consumer products are commonly tied to strict liability, under which companies are held responsible for their products' malfunctions. Fallible humans design algorithms, software, robots, and AI; hence, there is the possibility of a fallible algorithm or software process. Could it also be appropriate to borrow from the pharmaceutical field the term "unavoidably unsafe" product?[1] This doctrine could apply where a product cannot be made entirely free of risk.

Risks are always present. Could the "unavoidably unsafe" product doctrine be applied to AI? As practitioners, we assess risks, and we acknowledge that certain products carry them. AI, software design, and robotic processes can be assessed, but what will be discovered is a record of beta testing and calibration that makes finding responsibility difficult, because reasonable measures were taken. Industry standard practice will set the bar. In addition, if reasonable notice was provided about the product's risk, there cannot be a supportable argument of failure to warn. More specific to AI, when one considers the volume of data fed into the product or software before it is delivered, or before the medical procedure is performed, and the amount of testing and assessment before deployment, there could hardly be a tenable argument of failure to test. Furthermore, consider the advisories about the need for frequent updates to address potential glitches, vulnerabilities, or detected malfunctions. Who should be responsible for the updates, and who should bear responsibility for the harm if an update was not executed on the software monitoring a pacemaker or calibrating a robot's diagnostics?

The challenges remain for attributing liability with the use of AI. Data is not easy to obtain, especially reliable data specific to the need or service to be delivered. Another challenge is timely and appropriate training and development, because not all devices work in sync with one another. The search continues for a legal remedy for discerning liability where AI produced the result that gave rise to the potential action. The trust of the patient, patron, and consumer is contingent on results, and on the possibility of redress when humans relied on AI.

[1] Restatement (Second) of Torts § 402A (1965).
copyright 2017
