Will text generated by artificial intelligence infringe on copyright?
How should traffic accidents caused by autonomous driving be penalized?
The rapid advancement of artificial intelligence (AI) has brought with it new legal questions.
On the 17th, California Governor Gavin Newsom signed a series of AI-related bills, which make it illegal to publish "deepfake" campaign content in the period around an election. This is the most stringent set of AI regulations in the United States to date.
Professor Franita Tolson, Dean of the Gould School of Law at the University of Southern California, said in an exclusive interview with First Financial Daily that the American legal system is still in an exploratory stage in addressing the new challenges posed by artificial intelligence. However, she noted a consensus in the American legal community: an effective legal framework should assign risks to the individuals, businesses, or entities best positioned to bear them.
Protecting human creativity

In May 2023, the Writers Guild of America launched a strike that brought Hollywood to a standstill.
Beyond demanding higher pay, the guild also called on producers to limit the use of artificial intelligence in the writing of scripts and screen productions. Although the strike officially ended at the end of September last year with an agreement between labor and management, Hollywood has yet to emerge from the shadow of artificial intelligence.
The AI bills Newsom signed on the 17th also responded to demands from California's media industry. Among them, AB 2602 requires film companies to obtain actors' consent before producing AI-generated replicas of their voices or likenesses.
On the question of how to use artificial intelligence appropriately, Tolson first described to the First Financial Daily reporter her approach in her own legal teaching. When students use artificial intelligence in their coursework, she said, she educates and encourages them to use it responsibly and ethically, rather than rejecting the new technology outright.
She added that the law prioritizes the protection of human creativity. If a voice, image, or literary work is created entirely by artificial intelligence, the law should not protect it. If a work combines individual creativity with artificial intelligence, however, the situation is far more complicated: the law offers no clear definition of how such works should be protected.
Beyond disputes over individual cases, the application of artificial intelligence has long transcended national borders. Countries differ widely in their principles and methods of AI regulation, and AI policy is closely intertwined with geopolitics and economic competition, making the transnational governance of artificial intelligence "even more difficult."
On this point, Tolson told the First Financial Daily reporter that although the technology is new, the underlying problem is old. Conflicts between countries over various issues are nothing new; artificial intelligence is simply a new arena for them. The trouble, she added, is that we never arrived at good solutions to the old problems, and artificial intelligence now layers new complexity on top of them. What we should be thinking about is how to apply old rules in innovative ways, and to create new rules, to solve today's problems.
Of course, AI regulation extends beyond the creative field. The global automotive industry is undergoing profound change, and with the help of artificial intelligence, autonomous driving has become a core technology for major carmakers worldwide. Yet autonomous vehicles have also caused accidents on the road, raising new legal questions.
On March 18, 2018, an Uber self-driving car struck and killed a middle-aged woman crossing the road in Tempe, Arizona. It was the world's first fatal accident involving an autonomous vehicle on a public road.
The vehicle involved was a Volvo equipped with Uber's self-driving system and was in autonomous mode at the time of the accident, with a safety driver on board who could intervene manually in an emergency. After reviewing the vehicle's onboard footage, however, local police said they found no indication that either the self-driving system or the driver had taken emergency action to avoid the collision.
On the risks posed by autonomous driving, Tolson told the First Financial Daily reporter that she believes the legal risks should be borne by the companies that develop the technology, even though individuals may act irresponsibly when using it.
She cited Tesla as an example: its autonomous driving feature requires the driver to keep their hands on the steering wheel throughout the drive, and if the driver repeatedly takes their hands off, the car disables the feature. Tesla designed it this way, Tolson analyzed, because the company recognizes that it would be held responsible if the technology caused an accident.
"From a legal perspective, this makes sense, because the specific technology is developed by the company," Tolson said.