Article review one

Alonni Wells
Cybersecurity 201S
Oct 3, 2025


Regulating Civil Liability for AI Damages in Jordan Legislation
This article examines how Jordan’s laws handle harm caused by artificial
intelligence (AI) and how people can be held responsible when AI causes damage. The topic
connects to the social sciences because it focuses on how laws, people, and technology
interact in society, and on how fairness and responsibility come into play when
humans and machines make decisions. The authors ask a few main questions: How does
Jordan deal with AI-related harm? Are there gaps or missing rules in the law? And what
can Jordan learn from other countries, such as those in the European Union? The goal of the
study is to find out what’s missing in Jordan’s current laws and to suggest new ideas to make
things safer and fairer.

The researchers used several methods to study the issue. They
analyzed Jordan’s legal system, compared it to laws in other countries, interviewed
experts, and surveyed 100 people, including lawyers, tech workers, and government
employees. The results showed that most respondents don’t know much about AI liability laws,
but most of them think Jordan needs to create new ones. About half believe the
creators and users of AI should always be held responsible if something goes wrong. The study
found that Jordan’s current laws don’t clearly explain who should be blamed when AI
causes harm.

To fix this, the authors suggest creating new systems. They recommend
a hybrid liability rule that mixes strict and fault-based liability for low-risk AI, setting up
a national AI agency to oversee AI use, creating a special AI court, and requiring all AI
systems to be transparent and easy to understand. This connects to class ideas
about ethics, fairness, and responsibility in technology. It also matters for regular people
and vulnerable groups, because if the laws aren’t clear, they might not get justice when AI
makes mistakes or shows bias. Overall, the article helps society by showing how AI can be
managed safely and fairly. It calls for better laws, more awareness, and stronger
protections so that technology helps people instead of hurting them.


https://cybercrimejournal.com/menuscript/index.php/cybercrimejournal/article/view/450/130