The White House Already Knows How to Make AI Safer

Last updated: 2023/07/25 at 4:12 PM

Second, it could instruct any federal agency procuring an AI system that has the potential to “meaningfully impact [our] rights, opportunities, or access to critical resources or services” to require that the system comply with these practices and that vendors provide evidence of this compliance. This recognizes the federal government’s power as a customer to shape business practices. After all, it is the biggest employer in the country and could use its buying power to dictate best practices for the algorithms that are used to, for instance, screen and select candidates for jobs.

Third, the executive order could demand that anyone taking federal dollars (including state and local entities) ensure that the AI systems they use comply with these practices. This recognizes the important role of federal investment in states and localities. For example, AI has been implicated in many components of the criminal justice system, including predictive policing, surveillance, pre-trial incarceration, sentencing, and parole. Although most law enforcement practices are local, the Department of Justice offers federal grants to state and local law enforcement and could attach conditions to these funds stipulating how to use the technology.

Finally, this executive order could direct agencies with regulatory authority to update and expand their rulemaking to cover processes within their jurisdiction that involve AI. Some initial efforts to regulate the use of AI in medical devices, hiring algorithms, and credit scoring are already underway, and these initiatives could be expanded further. Worker surveillance and property valuation systems are just two examples of areas that would benefit from this kind of regulatory action.

Of course, the testing and monitoring regime for AI systems that I’ve outlined here is likely to provoke a range of concerns. Some may argue, for example, that other countries will overtake us if we slow down to implement such guardrails. But other countries are busy passing their own laws that place extensive restrictions on AI systems, and any American businesses seeking to operate in these countries will have to comply with their rules. The EU is about to pass an expansive AI Act that includes many of the provisions I described above, and even China is placing limits on commercially deployed AI systems that go far beyond what we are currently willing to consider.

Others may express concern that this expansive set of requirements might be hard for a small business to comply with. This could be addressed by linking the requirements to the degree of impact: A piece of software that can affect the livelihoods of millions should be thoroughly vetted, regardless of how big or small the developer is. An AI system that individuals use for recreational purposes shouldn't be subject to the same strictures.

There are also likely to be concerns about whether these requirements are practical. Here again, it’s important not to underestimate the federal government’s power as a market maker. An executive order that calls for testing and validation frameworks will provide incentives for businesses that want to translate best practices into viable commercial testing regimes. The responsible AI sector is already filling with firms that provide algorithmic auditing and evaluation services, industry consortia that issue detailed guidelines vendors are expected to comply with, and large consulting firms that offer guidance to their clients. And nonprofit, independent entities like Data and Society (disclaimer: I sit on their board) have set up entire labs to develop tools that assess how AI systems will affect different populations.
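To make "algorithmic auditing" concrete, here is a minimal sketch of one widely used screening check, the "four-fifths rule" comparison of selection rates that auditors commonly apply to hiring algorithms like those mentioned above. The data, group labels, and function names are hypothetical illustrations, not any particular firm's methodology.

```python
# A minimal sketch of one common algorithmic-audit check: the "four-fifths
# rule" (adverse impact ratio) applied to a hiring model's selection rates.
# All data and group labels below are hypothetical, purely for illustration.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the fraction of positive decisions per demographic group.

    `decisions` is an iterable of (group, selected) pairs, where
    `selected` is True if the model advanced the candidate.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 are a conventional red flag (the four-fifths rule)
    that the system may disproportionately screen out one group.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit of a resume-screening model's outcomes.
outcomes = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
         + [("group_b", True)] * 35 + [("group_b", False)] * 65

rates = selection_rates(outcomes)
print(rates)                        # {'group_a': 0.6, 'group_b': 0.35}
print(adverse_impact_ratio(rates))  # ~0.58 -> below the 0.8 threshold
```

Checks like this are deliberately simple; real audits pair such summary statistics with deeper analyses of error rates, feature influence, and downstream outcomes across populations.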

We’ve done the research, we’ve built the systems, and we’ve identified the harms. There are established ways to make sure that the technology we build and deploy can benefit all of us while reducing harms for those who are already buffeted by a deeply unequal society. The time for studying is over—now the White House needs to issue an executive order and take action.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at ideas@wired.com.
