AI Governance

Published by alymurdhani

As artificial intelligence and machine learning reach nearly every sector, there is growing discussion of how governments will manage the implications and uses of these emerging technologies. Government agencies around the globe have begun examining what regulations need to be put in place so that both individual agencies and industries implement the technologies responsibly. The issues to address include the ethics of such use, liability for automated decisions, impacts on jobs, and much more. This is an important step in the implementation life cycle of artificial intelligence as a whole.

Military and EU AI Act

The United States government has proposed a set of guidelines for the military's use of AI and automated systems, as covered by Matthew Gooding in an article for Tech Monitor. The declaration was announced at a summit on the topic in the Netherlands, in an attempt to encourage US allies in the EU to follow along and lay out standards so that these technologies are used responsibly in the defense space.

The Department of Defense invested $874 million in artificial intelligence technologies in its 2022 budget. An investment of this size in such a young field raises many questions about ethics, including the certainty that other countries will be developing the same technologies. This is part of why the Department of Defense released its declaration now, as the EU has been working to pass a heavily debated AI Act that would regulate the use of automated systems across Europe.

According to the EU itself, this act would be the first law on AI by a major regulator anywhere in the world. It would assign AI applications to three risk categories, under which an application is either banned, highly regulated, or not regulated at all. This approach has its troubles, particularly in defining when an AI application becomes “high risk.” ChatGPT has come up because of its current popularity and wide use; according to the EU commissioner, ChatGPT and generative AI more broadly would be covered by provisions in the Act so that they remain open to the public but controlled in certain use cases. Generative AI can produce hate speech or fake news, which is problematic for many reasons, so this aspect would need to be examined extensively before the law passes. There are also loopholes that would allow the “low risk” category to be exploited, since those applications would not fall under any governance at all. The AI Act is projected to pass within the coming months, once many specific details are ironed out.
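The tiered approach described above can be sketched as a simple lookup. This is only an illustration of the structure: the tier names and example applications below are hypothetical simplifications, not the Act's actual legal classifications.

```python
# Illustrative sketch of a tiered risk scheme, loosely modeled on the EU AI
# Act's banned / highly regulated / unregulated structure. The tiers and
# example applications are hypothetical, not the Act's real text.

RISK_TIERS = {
    "unacceptable": "banned",
    "high": "highly regulated",
    "minimal": "not regulated",
}

# Hypothetical mapping of application types to risk tiers.
APPLICATION_TIER = {
    "social_scoring": "unacceptable",
    "resume_screening": "high",
    "spam_filter": "minimal",
}

def treatment(application: str) -> str:
    """Return how a (hypothetical) application would be treated under the scheme."""
    tier = APPLICATION_TIER.get(application)
    if tier is None:
        # The loophole problem noted above: uses that fall outside the
        # enumerated categories escape governance entirely.
        return "unclassified"
    return RISK_TIERS[tier]

print(treatment("social_scoring"))  # banned
print(treatment("chatbot"))         # unclassified
```

The `unclassified` branch is the point of contention: anything not captured by the defined categories, or pushed into "minimal," faces no oversight at all.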

New York AI Governance Efforts

In a more centralized effort, the New York State Comptroller's office audited New York City's development and use of AI tools and systems over the past three years. The audit examined the city's Office of Technology and Innovation in order to assess whether a proper governance structure exists over the ongoing development and use of AI. New York City uses AI across agencies, including the NYPD, which uses AI-powered facial recognition, and the Administration for Children's Services, which uses AI to predict which children face future harm in order to prioritize cases.

The audit found that NYC does not have an adequate AI governance framework in place. There are laws requiring agencies to report their use of AI, but no regulations on the use itself. The comptroller's office calls for central regulation and guidance to ensure "that the City's use of AI is transparent, accurate, and unbiased and avoids disparate impacts." The audit was conducted to head off issues and potential liabilities that could arise from irresponsible use of AI, and it is a great way to lead the charge for other governments to follow.

Potential Unemployment and Tax Methods

Another big hurdle in the political space is automation's threat to jobs. US politicians are starting to propose ways to deal with the unemployment that could come from widespread implementation of AI and automation. Bernie Sanders is one of them, discussing the idea in his recent book. As covered by Tristan Bove of Fortune, Sanders proposes taxing companies that replace current workers with automation on a large scale. Bill Gates has also been a part of the discussion around taxing corporations that make these changes. "If workers are going to be replaced by robots, as will be the case in many industries, we're going to need to adapt tax and regulatory policies to assure that the change does not simply become an excuse for race-to-the-bottom profiteering by multinational corporations," Sanders writes in his book, It's OK to Be Angry About Capitalism.

President Biden's administration sees the answer more as replacing factory jobs with the fast-growing job market in the tech industry. That skills shift would take many years of adjustment, however, and wide-scale job cuts would still produce an unemployment problem in the meantime. A middle ground would have to form between fully automated tasks and keeping human workers in the process. Taxing companies out of automating may not work either, since the cost savings from much lower labor expenses could still leave companies ahead even after the tax. There are many ways to look at this scenario, but we will have to wait and see whether these projections come true; the idea of implementation so widespread that it causes an unemployment crisis is still only speculation about where the technology and various industries are headed.
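The point that a tax may not deter automation comes down to simple arithmetic: a flat tax only changes the decision if it exceeds the labor savings. A back-of-the-envelope sketch, with entirely hypothetical figures:

```python
# Back-of-the-envelope sketch: a flat automation tax only deters replacing a
# worker if it exceeds the labor savings. All figures are hypothetical.

def net_savings(annual_labor_cost: float,
                annual_automation_cost: float,
                automation_tax: float) -> float:
    """Annual net benefit of automating one role; positive means it still pays."""
    return annual_labor_cost - annual_automation_cost - automation_tax

# Hypothetical: a $60k/year worker replaced by a $15k/year automated system.
# With a $20k tax, automation still nets $25k/year, so the tax fails to deter.
print(net_savings(60_000, 15_000, 20_000))  # 25000
# The tax would have to exceed $45k/year to flip the decision.
print(net_savings(60_000, 15_000, 50_000))  # -5000
```

This is why a tax alone may merely collect revenue from automation rather than prevent it, unless set high enough to erase the savings.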

Sources

US government proposes guidelines for responsible AI use by military

The US government has proposed a set of guidelines for the way artificial intelligence (AI) and automated systems should be used by the military, and says it hopes its allies will sign up to the proposals.

The Artificial Intelligence Act

There are several loopholes and exceptions in the proposed law. These shortcomings limit the Act’s ability to ensure that AI remains a force for good in your life. Currently, for example, facial recognition by the police is banned unless the images are captured with a delay or the technology is being used to find missing children.

Report: NYC government needs stronger artificial intelligence oversight

The state comptroller’s office is calling for stronger oversight of New York City government’s artificial intelligence programs, after an audit identified lapses that it says heighten risks of bias, inaccuracies and harm “for those who live, work or visit NYC.”

Artificial Intelligence Governance

Objective: To assess New York City's progress in establishing an appropriate governance structure over the development and use of artificial intelligence (AI) tools and systems. The audit covered the period from January 2019 through November 2022.

Bernie Sanders sides with Bill Gates and says he wants to tax the robot that takes your job

Automation that could eliminate countless jobs may be the next big political challenge in the U.S., and politicians are already starting to discuss how to deal with a potentially inevitable unemployment surge driven by robots and artificial intelligence.

8 Responses to AI Governance

  1. I understand that our military is extremely important to be properly funded for advancing our technology, but I find it interesting how the US completely underfunds other governmental agencies when it comes to improving technology. For example, the IRS can barely find people our age that can understand their current systems. Before we go and invest heavily into AI, I think we need to look at updating the extremely outdated systems and processes for governmental agencies.

  2. Great post. We don't really spend enough time talking about the governance issues with tech. There's a really interesting TED talk from the founder of Uber (I think I've assigned it in a few weeks) that talks about how governance really was an issue for their growth and that they essentially needed to break the law for the company to exist.

  3. Very interesting post Aly, this topic is something I didn't think about when it comes to AI applications, but I believe it's super serious because the government implementing AI is one of the riskiest sectors it could be in. It will be interesting to see how this topic grows and becomes even more contentious as technology improves.

  4. Hi Aly! This was a really interesting post. I have always been intrigued by how the effects of AI and using AI will be regulated. Like you stated in your blog post, there are definitely some loopholes. I feel that the line between highly regulated, banned, and not regulated could become blurry when it comes to AI. Companies/governments can just claim that they were using it for something else or something less severe. As for unemployment, I do have a feeling that AI and automation, in general, will easily replace those lower-class jobs, causing MAJOR unemployment in the United States. It's going to be interesting to see how regulation for AI plays out in the future.

  5. Hello Aly! This was an exciting post to read. I didn't know that the Department of Defense made such a big investment in AI technologies. It will be interesting to see how these technologies change the future of our economy. As more and more jobs become automated, many people will worry if their jobs are next and this will be something that we will have to adapt to.

  6. This was really great to read through, I wanted to firstly offer props to your writing style and how entertaining and also informative it is at the same time. I also really enjoyed the pyramid diagram of the risks associated with AI in different functions, specifically just in that chatbots (I was thinking ChatGPT) are considered 2nd level risk but how the public opinion on them at times can seem a lot higher. I wonder what that could mean for the future of AI, such as will we actually see it expand into greater sectors where the risk increases or are we already at almost a cap on public opinion being iffy about it.

  7. Hey Aly! I loved reading this post because it was very attention grabbing. I did not know at all that the Department of Defense invested $874 million in artificial intelligence technologies within its 2022 budget. I feel like this is such an amazing investment in our military, as it can help us in many ways during tough, unexpected times.

  8. This was such an interesting read, as it feels very real and close to home! I worry about the implementation of AI in our military and politics, though. While AI can be extremely useful in these scenarios, I worry about their lack of humanity when it comes to touchy issues like these. Also, I am very interested in reading more about AI taking over many jobs in the world, and I wonder how businesses and the government will respond to this. I hope it ends up being better overall for humanity.