US Code
Title 15—COMMERCE AND TRADE
CHAPTER 7—NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY
§ 278h–1. Standards for artificial intelligence

(a) Mission
The Institute shall—
(1) advance collaborative frameworks, standards, guidelines, and associated methods and techniques for artificial intelligence;
(2) support the development of a risk-mitigation framework for deploying artificial intelligence systems;
(3) support the development of technical standards and guidelines that promote trustworthy artificial intelligence systems; and
(4) support the development of technical standards and guidelines by which to test for bias in artificial intelligence training data and applications.
(b) Supporting activities
The Director of the National Institute of Standards and Technology may—
(1) support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems, which may include—
(A) privacy and security, including for datasets used to train or test artificial intelligence systems and software and hardware used in artificial intelligence systems;
(B) advanced computer chips and hardware designed for artificial intelligence systems;
(C) data management and techniques to increase the usability of data, including strategies to systematically clean, label, and standardize data into forms useful for training artificial intelligence systems and the use of common, open licenses;
(D) safety and robustness of artificial intelligence systems, including assurance, verification, validation, security, control, and the ability for artificial intelligence systems to withstand unexpected inputs and adversarial attacks;
(E) auditing mechanisms and benchmarks for accuracy, transparency, verifiability, and safety assurance for artificial intelligence systems;
(F) applications of machine learning and artificial intelligence systems to improve other scientific fields and engineering;
(G) model documentation, including performance metrics and constraints, measures of fairness, training and testing processes, and results;
(H) system documentation, including connections and dependences within and between systems, and complications that may arise from such connections; and
(I) all other areas deemed by the Director to be critical to the development and deployment of trustworthy artificial intelligence;
(2) produce curated, standardized, representative, high-value, secure, aggregate, and privacy protected data sets for artificial intelligence research, development, and use;
(3) support one or more institutes as described in section 9431(b) of this title for the purpose of advancing measurement science, voluntary consensus standards, and guidelines for trustworthy artificial intelligence systems;
(4) support and strategically engage in the development of voluntary consensus standards, including international standards, through open, transparent, and consensus-based processes; and
(5) enter into and perform such contracts, including cooperative research and development arrangements and grants and cooperative agreements or other transactions, as may be necessary in the conduct of the work of the National Institute of Standards and Technology and on such terms as the Director considers appropriate, in furtherance of the purposes of this division.
(c) Risk management framework
Not later than 2 years after January 1, 2021, the Director shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for trustworthy artificial intelligence systems. The framework shall—
(1) identify and provide standards, guidelines, best practices, methodologies, procedures and processes for—
(A) developing trustworthy artificial intelligence systems;
(B) assessing the trustworthiness of artificial intelligence systems; and
(C) mitigating risks from artificial intelligence systems;
(2) establish common definitions and characterizations for aspects of trustworthiness, including explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, verification, interpretability, and other properties related to artificial intelligence systems that are common across all sectors;
(3) provide case studies of framework implementation;
(4) align with international standards, as appropriate;
(5) incorporate voluntary consensus standards and industry best practices; and
(6) not prescribe or otherwise require the use of specific information or communications technology products or services.
(d) Participation in standard setting organizations
(1) Requirement
The Institute shall participate in the development of standards and specifications for artificial intelligence.
(2) Purpose
The purpose of this participation shall be to ensure—
(A) that standards promote artificial intelligence systems that are trustworthy; and
(B) that standards relating to artificial intelligence reflect the state of technology and are fit-for-purpose and developed in transparent and consensus-based processes that are open to all stakeholders.
(e) Data sharing best practices
Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies, including options for partnership models between government entities, industry, universities, and nonprofits that incentivize each party to share the data they collected.
(f) Best practices for documentation of data sets
Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop best practices for datasets used to train artificial intelligence systems, including—
(1) standards for metadata that describe the properties of datasets, including—
(A) the origins of the data;
(B) the intent behind the creation of the data;
(C) authorized uses of the data;
(D) descriptive characteristics of the data, including what populations are included and excluded from the datasets; and
(E) any other properties as determined by the Director; and
(2) standards for privacy and security of datasets with human characteristics.
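As an illustrative sketch only, the metadata properties enumerated in subsection (f)(1) can be read as the fields of a structured dataset record. The statute prescribes no schema; the field names, types, and example values below are assumptions for illustration.

```python
# Hypothetical sketch of a dataset metadata record covering the properties
# listed in subsection (f)(1). All names and types are illustrative, not
# anything the statute specifies.
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    origins: str                     # (f)(1)(A): where the data came from
    creation_intent: str             # (f)(1)(B): the intent behind its creation
    authorized_uses: list[str]       # (f)(1)(C): permitted uses of the data
    populations_included: list[str] = field(default_factory=list)  # (f)(1)(D)
    populations_excluded: list[str] = field(default_factory=list)  # (f)(1)(D)

# Example record with made-up values.
record = DatasetMetadata(
    origins="public web crawl",
    creation_intent="benchmark image classification",
    authorized_uses=["research", "model evaluation"],
)
print(record.authorized_uses)  # ['research', 'model evaluation']
```

Subsection (f)(1)(E) leaves room for further Director-determined properties, which a real schema would need to accommodate.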
(g) Testbeds
In coordination with other Federal agencies as appropriate, the private sector, and institutions of higher education (as such term is defined in section 1001 of title 20), the Director may establish testbeds, including in virtual environments, to support the development of robust and trustworthy artificial intelligence and machine learning systems, including testbeds that examine the vulnerabilities and conditions that may lead to failure in, malfunction of, or attacks on such systems.
(h) Authorization of appropriations
There are authorized to be appropriated to the National Institute of Standards and Technology to carry out this section—
(1) $64,000,000 for fiscal year 2021;
(2) $70,400,000 for fiscal year 2022;
(3) $77,440,000 for fiscal year 2023;
(4) $85,180,000 for fiscal year 2024; and
(5) $93,700,000 for fiscal year 2025.
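The five authorized amounts appear to follow a 10 percent annual escalation from the $64,000,000 FY2021 base, with each year's figure rounded to the nearest $10,000. This is an observation about the numbers, not something the statute states; the sketch below reproduces the schedule under that assumption.

```python
# Sketch reproducing the subsection (h) schedule under an assumed rule:
# grow the prior year's amount by 10% and round to the nearest $10,000.
# Integer arithmetic avoids floating-point drift.
def escalate(base: int, years: int, round_to: int = 10_000) -> list[int]:
    """Apply 10% annual growth, rounding each year to the nearest round_to."""
    amounts = [base]
    for _ in range(years - 1):
        raw = amounts[-1] * 11 // 10                      # exact 10% increase
        amounts.append((raw + round_to // 2) // round_to * round_to)
    return amounts

print(escalate(64_000_000, 5))
# [64000000, 70400000, 77440000, 85180000, 93700000]
```

The rounding step only matters for FY2024 and FY2025, where an exact 10 percent increase would give $85,184,000 and a correspondingly larger FY2025 figure.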
