World

New York state to monitor its use of AI after governor signs new bill into law

A bill signed into law last week will prevent New York state government agencies from replacing human workers with artificial intelligence software and will require agencies to conduct reviews and publish reports that detail how they're using AI.

Law prevents government agencies from cutting worker hours or replacing them with AI

OpenAI's ChatGPT app is displayed on an iPhone in New York, May 18, 2023. New York has signed a new bill into law that requires state agencies to regulate and report on their use of artificial intelligence, a move which comes as the technology is becoming increasingly popular across numerous sectors. (Richard Drew/The Associated Press)

New York state government agencies will have to conduct reviews and publish reports that detail how they're using artificial intelligence software, under a new law signed by Gov. Kathy Hochul.

Hochul, a Democrat, signed the bill last week after it was passed by state lawmakers earlier this year.

The law requires state agencies to perform assessments of any software that uses algorithms, computational models or AI techniques, and then submit those reviews to the governor and top legislative leaders along with posting them online.

It also bars the use of AI in certain situations, such as an automated decision on whether someone receives unemployment benefits or child-care assistance, unless the system is being consistently monitored by a human.

WATCH | Canada invests in Artificial Intelligence Safety Institute: 

Canada launches AI watchdog to oversee the technology’s safe development and use

Amid rapid global advances and deployment of artificial intelligence technologies, the federal government has invested millions to combine the minds of three existing institutes into one that can keep an eye on potential dangers ahead.

Law shields workers from having hours cut due to AI

State workers are also shielded from having their hours or job duties limited because of AI under the law, addressing a major concern that critics have raised about generative AI.

State Sen. Kristen Gonzalez, a Democrat who sponsored the bill, called the law an important step in setting up guardrails for how the emerging technology is used in state government.

Experts have long been calling for more regulation of generative AI as the technology becomes more widespread.

Some of the biggest concerns raised by critics, apart from job security, include the security of personal information and the risk that AI could amplify misinformation, given the technology's propensity to invent facts, repeat false statements and generate near photo-realistic images from prompts.

Several other states have implemented laws regulating AI, or are poised to. In May, Colorado introduced the Colorado AI Act, which sets out requirements for developers to avoid bias and discrimination in high-risk AI systems that make substantial decisions; it comes into effect in 2026. In California, numerous AI bills signed into law in September will enter into force in the new year, including one requiring large online platforms to identify and block deceptive content related to elections, and another requiring developers to disclose the data sets used to train their systems.

Canada has no federal regulatory framework for AI, although a proposed Artificial Intelligence and Data Act (AIDA) has been packaged with Bill C-27. It is still under consideration, with no timeline for when, or whether, it will become law. Earlier this fall, the federal government also announced the launch of the Canadian Artificial Intelligence Safety Institute, which is intended to advance research on AI safety and responsible development.

Alberta is developing its own regulations for artificial intelligence, the province's privacy commissioner said in March, with a particular focus on privacy issues such as deepfakes.

With files from CBC News