Chinese regulators likely learned from the EU AI Act, says Jeffrey Ding, an assistant professor of Political Science at George Washington University. “Chinese policymakers and scholars have said that they’ve drawn on the EU’s acts as inspiration for things in the past.”
But at the same time, some of the measures taken by the Chinese regulators aren’t really replicable in other countries. For example, the Chinese government is asking social platforms to screen user-uploaded content for AI. “That seems something that is very new and might be unique to the China context,” Ding says. “This would never exist in the US context, because the US is famous for saying that the platform is not responsible for content.”
But What About Freedom of Expression Online?
The draft regulation on AI content labeling is seeking public feedback until October 14, and it may take another several months for it to be modified and passed. But there’s little reason for Chinese companies to delay preparing for when it goes into effect.
Sima Huapeng, founder and CEO of the Chinese AIGC company Silicon Intelligence, which uses deepfake technologies to generate AI agents and influencers and to replicate living and dead people, says his product currently lets users choose voluntarily whether to mark generated content as AI. But if the law passes, he may have to make that labeling mandatory.
“If a feature is optional, then most likely companies won’t add it to their products. But if it becomes compulsory by law, then everyone has to implement it,” Sima says. It’s not technically difficult to add watermarks or metadata labels, but it will increase the operating costs for compliant companies.
Policies like this can steer AI away from being used for scams or privacy invasion, he says, but they could also spur the growth of a black market for AI services, where companies dodge legal compliance to save on costs.
There’s also a fine line between holding AI content producers accountable and policing individual speech through more sophisticated tracing.
“The big underlying human rights challenge is to be sure that these approaches don’t further compromise privacy or free expression,” says Gregory. While implicit labels and watermarks can be used to identify sources of misinformation and inappropriate content, the same tools can give platforms and the government stronger control over what users post on the internet. In fact, concerns about how AI tools could go rogue have been one of the main drivers of China’s proactive AI legislation efforts.
At the same time, the Chinese AI industry is pushing back against the government, seeking more room to experiment and grow, since Chinese companies are already behind their Western peers. An earlier Chinese generative-AI law was watered down considerably between the first public draft and the final bill, removing identity-verification requirements and reducing the penalties imposed on companies.
“What we’ve seen is the Chinese government really trying to walk this fine tightrope between ‘making sure we maintain content control’ but also ‘letting these AI labs in a strategic space have the freedom to innovate,’” says Ding. “This is another attempt to do that.”