Rising Global Regulation For Artificial Intelligence – Privacy Protection

INTRODUCTION

Across a wide range of industries, including advertising, banking, telecommunications, manufacturing, transportation, life sciences, waste management, defense and agriculture, the use of AI and interest in its diverse applications are steadily increasing. Businesses are turning to AI systems, and the related technology of machine learning, to increase their revenue and the quality and speed of their production or services, or to lower operating costs by automating and optimizing processes previously reserved for human labor. Government and industry leaders now frequently talk about the need to adopt AI, maintain a “strategic edge” in AI innovation capabilities, and ensure that AI is used in correct or humane ways.

Yet the recent surge of interest in AI sometimes obscures the fact that it is not governed by any single general body of “AI laws”—or even an agreed-upon definition of what AI is or how it should be used or regulated. With applications as diverse as chatbots, facial recognition, digital assistants, intelligent robotics, autonomous vehicles, medical image analysis and precision planting, AI defies easy definition, and can implicate areas of law that largely developed before AI became commonplace. Because it is an intangible process that requires technical expertise to design and operate, AI can seem mysterious and beyond the grasp of ordinary people. Indeed, most lawyers or business leaders will never personally train or deploy an AI algorithm—although they are increasingly called upon to negotiate or litigate AI-related issues.

This White Paper seeks to demystify AI for non-technical readers and reviews the core legal concepts that governments in several key jurisdictions – the European Union, China, Japan and the United States – are developing in their efforts to regulate AI and encourage its responsible development and use. While the AI legal issues facing companies will often be specific to particular products, transactions and jurisdictions, this White Paper also includes a checklist of key questions that in-house counsel may want to address when advising on the development, use, deployment or licensing of AI, whether within a company or in the transactional context. Ultimately, governments are implementing diverse and sometimes conflicting requirements. This landscape, which demands patient review and a strategic perspective from regulated parties, rewards an ability to explain technical products to regulators in clear, nontechnical terms.

WHAT IS AI?

AI consists of complex mathematical processes that form the basis of algorithms and software techniques for knowledge representation, logical processing and inference. One core technology behind AI is machine learning, in which AI models are trained on large amounts of data to draw out correlations and patterns, so that such models can then be used, for example, to process information and make autonomous decisions.
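As a toy illustration of the machine learning described above, the following sketch trains a simple classifier on a handful of labeled examples and then applies it to a new input. This is a minimal sketch only: the data are invented, and the scikit-learn library is assumed to be available.

```python
# Minimal machine-learning sketch (hypothetical data): a model is
# trained on labeled examples so it learns a pattern, then applied
# to new, unlabeled input to make an autonomous decision.
from sklearn.linear_model import LogisticRegression

# Training data: feature vectors paired with known labels (1 or 0).
X_train = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]
y_train = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # "training": fit model parameters to the data

# The trained model now predicts a label for input it has never seen.
print(model.predict([[0.15, 0.85]]))  # expected output: [1]
```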

The key to every AI is its “objective function”—the goal or goals its developers designed it to achieve. This objective function can vary widely – from identifying molecules with likely antibiotic properties, to predicting where and when inputs will be needed in a transport or manufacturing system, to spotting potential safety or security threats, to generating text, sound or images that meet certain specifications. To learn to achieve this objective function, AI models can be trained on large data sets – with varying degrees of human supervision and feedback – to learn to identify and make predictions based on patterns, similarities and fundamental characteristics, including those that people may never have conceptualized or observed. The AI is then asked to apply the model it has honed during training to a real situation, where it performs its task. This latter activity is often referred to as “inference”.
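To make the notions of an “objective function”, training and inference concrete, here is a minimal worked sketch in plain Python (all numbers invented). The objective function is mean squared error; training adjusts a single parameter by gradient descent to reduce that error; inference then applies the trained parameter to a new input.

```python
# Sketch: fit y ≈ w * x by minimizing a mean-squared-error objective.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs

w = 0.0                        # the model's single trainable parameter
for _ in range(200):           # training loop: gradient descent
    # gradient of the objective mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad           # step against the gradient

# Inference: apply the trained model to an input it has never seen.
print(round(w, 2))        # roughly 2.04 -- the learned pattern y ≈ 2x
print(round(w * 4.0, 2))  # prediction for new input x = 4.0: ~8.14
```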

[Figure: Artificial Intelligence (AI) Components]

AI components typically consist of data (both training data used to build the model and new, raw data for inference) and software processes that execute complex algorithms.

When trained and applied correctly, AI-based technology can unlock tremendous gains in productivity – enabling results or insights that would otherwise take prohibitively long periods of time to achieve through human reason alone, or even by humans using traditional computing techniques. In some cases, AI can be applied to replace or supplement “rote” tasks that a person would otherwise perform much more slowly. In other cases, AI can generate text (including computer code, or answers to basic customer inquiries), sound or images (including aspects of architectural or mechanical designs) that either replace the need for human input or serve as a first draft for human review. Often a human mind, informed by AI input, analysis and recommendations, can more quickly home in on a key set of options (pharmaceutical, strategic, etc.) that warrant closer study.

In many industries, the integration of AI-based technology is seen as the key to ensuring long-term competitiveness. Most industrialized countries have already entered the race for global market leadership in AI technology, using various means such as public funding. In addition, governments seek to support AI’s growth through legislative frameworks that allow the technology to develop and realize its potential.

However, as many governments and analysts have noted, the benefits of AI systems can also come with risks. For example, AI can contribute to the creation of “echo chambers” that display content based only on a user’s past online behavior, thereby amplifying their views and interests or exploiting their vulnerabilities. AI applications are also increasingly used in objects that regularly interact with humans, and may even be integrated into the human body, which may pose safety and security risks.

Governments seeking to regulate AI aim to build citizen trust in such technology while limiting potentially harmful applications. Yet different governments, and different agencies within the same government, sometimes have different concepts of what constitutes an appropriate way of training and applying AI. What one authority sees as a feature, another may see as a flaw. Furthermore, they – and regulated parties – may differ on the ideal relative weight to place on key considerations such as privacy, transparency, freedom and security. As governments apply diverse perspectives to this technically complex (and often inherently multijurisdictional) area, regulated parties face a complex, sometimes conflicting body of regulatory considerations that is itself rapidly changing. Training, deployment, marketing, use and licensing of AI, especially if these activities occur across multiple jurisdictions, increasingly require a multidisciplinary and multijurisdictional legal perspective.

HOW IS AI GOVERNED?

While many laws already apply to AI, ranging from IP protection to competition law and privacy, AI’s rapid expansion has alerted lawmakers worldwide, leading them to update legal and regulatory frameworks and, in some cases, to create entirely new ones. These global legal initiatives generally aim to address three main categories of issues:

  • First, legislation and regulations aim to promote AI deployment by creating a vibrant and secure data ecosystem. Data is required to train and build the algorithmic models embedded in AI, as well as to apply the AI systems for their intended use. In the European Union, AI’s hunger for data is partly regulated by the well-known GDPR; in addition, a proposed Data Act facilitating data access and sharing is underway. In comparison, the United States has taken a more decentralized approach to the development and regulation of AI-based technologies and the personal data that underpin them. Federal regulatory frameworks—often solely in the form of nonbinding guidance—have been issued on an agency-by-agency and subject-by-subject basis, and authorities sometimes clarified their standards in the course of Congressional hearings or agency investigations rather than through clearly prescriptive published rules. The People’s Republic of China, in turn, has expanded its data security and protection laws, with a particular emphasis on preventing the unauthorized export of data. While the central government promulgates generally applicable laws and regulations, specialized government agencies have provided regulations specific to their respective fields, and local governments are exploring more efficient but secure ways to share or trade data in their areas, such as setting up data exchange centers.

  • Second, regulators in various jurisdictions have proposed or imposed restrictions on certain AI systems or uses that have been judged to raise safety and human rights concerns. Targets for such restrictions include AI robots capable of taking lethal action without a meaningful opportunity for human intervention, or AI social or financial credit scoring systems that pose unacceptable risks of racial or socioeconomic discrimination. In the European Union, the sale or use of AI applications may become subject to uniform conditions (e.g., standardization or market authorization procedures). For example, the proposed EU AI Act aims to prohibit market access for high-risk AI systems, such as AI systems intended for the “real-time” and “post” remote biometric identification of natural persons. Members of Congress in the United States have advanced legislation that tackles certain aspects of AI technology, but in a more piecemeal, issue-focused way. For example, recently passed legislation aims to curb the effect on US cybersecurity and election security of certain applications of generative adversarial networks that can produce convincing synthetic likenesses of individuals (or “deepfakes”). The PRC and Japan have not yet issued mandatory laws or regulations limiting the application of AI in any specific area over concerns such as discrimination or privacy. But similar to the United States, China regulates several aspects important to the realization and development of AI, such as data security, protection of personal information, and automation, among others.

  • Third, governments have only begun to update traditional liability frameworks, which are not always deemed adequate to deal with harm allegedly caused by AI systems, given the variety of actors involved in their development and the interconnectivity and complexity of such systems. New liability frameworks are therefore being considered, such as establishing strict liability for producers of AI systems, to facilitate claims for consumer harm. The first comprehensive proposal comes from the European Union’s new draft liability rules for AI systems, which aim to facilitate access to redress for alleged “victims of AI” through easier access to evidence, a presumption of causation and a reversal of the burden of proof.

Each of these categories will be discussed further in the following sections.
