Although fiscal policy has a major role to play in supporting a more equal distribution of the gains and opportunities from new generative Artificial Intelligence (AI) technologies, this will require significant upgrades to social-protection and tax systems around the world, the International Monetary Fund (IMF) has said.
The Fund, which stated this in a blog post on Monday, noted that lessons from past automation waves, together with its own modeling, suggest that more generous unemployment insurance could cushion the negative impact of AI on workers, allowing displaced workers to find jobs that better match their skills.
According to the IMF, “most countries have considerable scope to broaden the coverage and generosity of unemployment insurance, improve portability of entitlements, and consider forms of wage insurance.

“At the same time, sector-based training, apprenticeships, and upskilling and reskilling programs could play a greater role in preparing workers for the jobs of the AI age. Comprehensive social-assistance programs will be needed for workers facing long-term unemployment or reduced local labor demand due to automation or industry closures.”

The Fund pointed out, however, that there will be important differences in how AI affects emerging-market and developing economies, and thus in how policymakers there should respond.
“While workers in such countries are less exposed to AI, they are also less protected by formal social-protection programs such as unemployment insurance because of larger informal sectors in their economies. Innovative approaches leveraging digital technologies can facilitate expanded coverage of social-assistance programs in these countries,” the IMF said.
On whether AI should be taxed to mitigate labor market disruptions and pay for its effects on workers, the IMF argued that a tax on AI is not advisable. “Your AI chatbot or co-pilot wouldn’t be able to pay such a tax—only people can do that.
A specific tax on AI might instead reduce the speed of investment and innovation, stifling productivity gains. It would also be hard to put into practice and, if ill-targeted, do more harm than good,” it stated.