WASHINGTON – August 9, 2021 (Investorideas.com Newswire) Businesses are increasingly adopting artificial intelligence tools to support workforce decisions in areas such as hiring and retaining high-performing employees. But to successfully deploy and maximize the productivity benefits of these AI tools, employers should address potential concerns to ensure the technology does not exacerbate biases or inequalities, produces fair and accurate results, and does not unduly compromise worker privacy. The government’s role in this process should be to encourage AI adoption and establish guardrails to limit harms, not to impose precautionary regulations that inhibit innovation, according to a new report from the Center for Data Innovation.
“The dominant narrative around AI is one of fear, so policymakers need to actively support the technology’s growth,” said Hodan Omaar, policy analyst at the Center for Data Innovation and author of the report. “It is critical for lawmakers to avoid intervening in ways that are ineffective, counterproductive, or harmful to innovation.”
The Center’s report describes how precautionary regulations, ranging from bans on specific types of technologies to opt-in requirements, impose unnecessary costs, limit innovation, and slow adoption. To more effectively encourage responsible use of AI for workforce decisions, the report enumerates eight policy principles:
1. Make government an early adopter of AI for workforce decisions and share best practices: National, subnational, and local governments should promote broad adoption of AI in the workforce, reducing the risks associated with AI and encouraging others to adopt and invest in the technology.
2. Ensure data protection laws support the adoption of AI for workforce decisions: Governments should ensure data protection laws align with their AI goals by reducing unnecessary regulatory costs and avoiding rules that undermine important data uses.
3. Ensure employment nondiscrimination laws apply regardless of whether an organization uses AI: Regulators should review and clarify how existing nondiscrimination laws apply to AI solutions so that employers understand their compliance obligations when using these tools.
4. Create rules to safeguard against new privacy risks in workforce data: Policymakers should craft data privacy legislation that generally allows employers to collect and use biometric data, encouraging innovation in the use of AI for the workforce, while restricting certain potentially invasive uses without consent.
5. Address concerns about AI systems for workforce decisions at the national level: Requiring AI tools to abide by a patchwork of broad state data protection laws creates unnecessary and unreasonable compliance costs for businesses and threatens the viability of the national market for AI tools. Policymakers should instead address these policy questions at the national level through comprehensive federal data protection legislation that preempts state data laws.
6. Enable the global free flow of employee data: Countries should hold employers accountable for managing the data they collect, regardless of where they store or process it.
7. Avoid regulating the inputs of AI systems used for workforce decisions: Countries that wish to see rapid growth of AI for workforce decisions, and to ensure these systems have sufficient, representative data to perform accurately, should avoid regulating the data sources these AI systems use.
8. Focus regulation on employers, not AI vendors: Employers are best suited to ensure that the AI systems they use operate as intended and to identify and rectify harmful outcomes, because it is employers, not vendors, who make the most important decisions about how these systems impact workers.
“AI can help businesses recruit and hire employees faster while also retaining valued employees and ensuring fair compensation,” said Omaar. “Because the overwhelming majority of AI applications for making decisions about the workforce benefit the economy, businesses, and workers, governments should encourage responsible adoption and use of this technology.”
The Center for Data Innovation is the leading global think tank studying the intersection of data, technology, and public policy. With staff in Washington, D.C. and Brussels, the Center formulates and promotes pragmatic public policies designed to maximize the benefits of data-driven innovation in the public and private sectors. It educates policymakers and the public about the opportunities and challenges associated with data, as well as technology trends such as open data, artificial intelligence, and the Internet of Things. The Center is a part of the nonprofit, nonpartisan Information Technology and Innovation Foundation. For more about the Center, visit datainnovation.org.