Research studies

The role of artificial intelligence in conflict resolution


Prepared by the researcher: Sayed Tantawy Mohamed – Master’s degree in public international law and doctoral researcher

Democratic Arab Center

Journal of Afro-Asian Studies : Tenth Issue – August 2021

A Periodical International Journal published by the “Democratic Arab Center” Germany – Berlin

Nationales ISSN-Zentrum für Deutschland
ISSN  2628-6475
Journal of Afro-Asian Studies

To download the PDF version of the research papers, please visit the following link:


This article presents the main aspects of artificial intelligence and their implications for international law. It also deals with the current artificial-intelligence revolution and sets out the arguments from which the existing international law for managing artificial intelligence in times of conflict is drawn.


The Internet and technological advancement in information and communication technologies (‘ICTs’) have significantly altered the way business is conducted and led to an ever-increasing use of electronic instead of paper-based means of communication and data storage. These revolutionary and innovative ICT applications have equally been extended to the justice system in a manner that has transformed, and continues to transform, in-court and out-of-court dispute-resolution techniques and schemes to ensure efficiency, fairness and the swift resolution of ensuing disputes.

Research importance

Creativity is a fundamental feature of human intelligence, and an inescapable challenge for AI. Even technologically oriented AI cannot ignore it, for creative programs could be very useful in the laboratory or the marketplace. AI models intended (or considered) as part of cognitive science seek to understand how it is possible for human minds to be creative. The earliest efforts to apply AI in the legal context date back at least to 1970. Many attempts were ambitious in terms of complexity and capabilities; some initiatives sought to create computerized judges that could perform complex legal reasoning. Despite the failure to achieve widespread adoption of AI in the legal realm, exploration in this area should continue.

As access-to-justice problems reach a ‘crisis level’, technology continues to progress. The JPES is meant to be a modest, businesslike and realistic system for implementation and adoption on a wide scale. If it succeeds, it could help to advance the development of AI justice technologies while also enhancing access to justice and ODR processes. Shifting the orientation from highly advanced to more modest systems in the legal realm will parallel the evolution of AI generally. According to technology author and journalist Steven Levy, the failure of initial, highly ambitious AI efforts led to a ‘winter’ in which no projects or visions could ‘grow’. In his view, this winter was followed by a reorientation towards processes at which computers were highly proficient. In Levy’s words: “[…] as the traditional dream of AI was freezing over, a new one was being born: machines built to accomplish specific tasks in ways that people never could.” In the justice context, the JPES is consistent with a shift to simpler, less sophisticated, more practical AI products that herald a new spring, characterized by the delivery and deployment of these systems.


Research problems

While the current capabilities of digital devices are enormously impressive, future increases in power and reductions in cost are inevitable. In the mid-1990s, courts were beginning to struggle with jurisdictional questions, such as where an event occurred if parties were in different places and interacting online. Many of the legal questions surfacing at the time, however, while interesting, were largely irrelevant to persons who found themselves involved in a dispute arising online. In the vast majority of situations where parties were in different places, land-based courts and systems were not really useful options for persons who felt aggrieved. The network’s rapid communication and information-processing capabilities, however, did open up opportunities for creative approaches and responses to problem solving for cases that did not go to court. In other words, many of the same forces that contributed to disputes could also be employed to resolve disputes. Today, there is little doubt that there is an ongoing and growing need for ODR. There are indeed large numbers of disputes stemming from online activities; in fact, there are greater numbers of disputes than anyone predicted. As will be noted later, eBay itself claims to have handled over sixty million disputes during 2010. In addition, over this period of time, how and when ODR is used has also expanded. Without neglecting the need to respond to disputes arising online, ODR has also been focusing attention on traditional kinds of disputes occurring offline. More to the point, the boundary line between the online and offline worlds is, as “the digital world merges with the physical world”, much less clear than it used to be. As a result, the challenge for ODR currently lies less in where the disputes originated than in finding tools and resources that can be effective in any dispute, regardless of where it originates.

I. The Role of Artificial Intelligence in Armed Conflict

Since the adoption of the four Geneva Conventions in 1949, which each contain
‘common Article 3’ regarding non-international armed conflicts, the law of armed
conflict has been seen to be binding on all ‘parties to the conflict’, whether State or
non-State actors. The scope of common Article 3 is limited, as was the extent to
which non-State groups were bound by the law of armed conflict, but the law
applicable to non-State actors has developed over the ensuing decades. It began
with the drafting of Additional Protocol II to the Geneva Conventions, applicable
to certain kinds of non-international armed conflict, and continued with the
development of customary international law. Obligations under international human rights law are directed specifically at States. Attempts are sometimes made to extend them to non-State actors. Numerous resolutions of the United Nations Security Council refer to the human rights obligations of all parties, even in non-international armed conflicts. However,
while non-State actors may have obligations imposed by both domestic and
international criminal law, it is not yet clear that all non-State armed groups,
particularly those which do not exercise control over territory, are bound by general
human rights law. In any event, even if non-State actors are bound by some
obligations under human rights law, they are not subject to the same international
judicial mechanisms for the enforcement of human rights law that States might be.

The use of Artificial Intelligence (hereinafter referred to as ‘AI’) during war is nothing novel, but with the advancement of AI technologies it poses many challenges to International Humanitarian Law (IHL) and international law. These need to be discussed in detail to resolve the legal and ethical issues related to the use of this technology during war. AI is the branch of science and engineering based on building intelligent machines capable of acting like humans in any complex environment, including warfare. AI is rapidly becoming a focus of the international economy, and it is seen as a new engine of social and economic development. AI is a frequent topic of discussion among scholars in recent times, regarding its probable advantages and disadvantages and its uses in various sectors, especially during war or other combat activities. It is quite probable that AI may be used to design new weapons, and may help in identification through voice and image recognition, in patrolling, and in evidence collection. The People’s Republic of China has already announced that by 2030 extensive use of AI may take place in military and security installations. Under this policy document, China also discussed formulating laws, regulations and ethical norms related to AI, amending existing laws to be in consonance with AI, designing intellectual-property-rights standards to support AI development and, most importantly, creating an infallible safety-regulation and assessment system for AI expansion. The US military is planning to have more robot soldiers than human soldiers by the year 2025. These combat robots would most likely be an inherent part of US fighting strategies within the next 10 to 15 years. These plans of the world’s superpowers clearly reveal that nations have a well-defined roadmap for the future of Artificial Intelligence and plan to use it during war and conflict situations.

II. Legal Status of Artificial Intelligence and Legal Liability in Terms of the Application of its Systems

The use of AI during war poses many queries in the minds of researchers. What will happen if criminal liability arises from the act of an AI? Who will be held responsible for such an act, and who will bear individual criminal responsibility: the inventor, the programmer, the military commander under command responsibility, or the AI-device programmer under individual criminal responsibility? Further questions arise: how can an AI device distinguish between a combatant and someone who is hors de combat? Can these AI machines arrest someone as a prisoner of war, and can a combatant surrender to an AI device? These are just a few of the questions that need to be answered before countries think of using AI machines and devices for war purposes.

AI war machines and devices have to pass the tests of military necessity, proportionality and distinction before they can be used for combat activities. As most of the principles of the Geneva Conventions are treated as customary international law, it becomes very important to predict all probable losses and damage that can result from autonomous weapons systems (AWS) playing a combat role. It is probable that these AI weapons can perform combat activities during war better than human beings, but there will always remain the fear that these advanced machines may decide to do things they were not instructed to do, or perform acts that humans never thought of or thought them incapable of doing. The ‘black box’ problem also creates a fear that these AI-based machines may not follow the Geneva Conventions and customary international law at all times. The functions that AI can perform during warfare are dual in nature: first, it can be used for combat activities, and second, for providing humanitarian assistance to people who are hors de combat. AI can also be really useful in natural, chemical and biological disasters (though its precision and legality are always debated). Some researchers also criticize AI on the ground that, rather than giving full control to AI weapons, it is much better if AI supports human action during war or humanitarian assistance. This is because a human decision goes through the Observe, Orient, Decide, Act (OODA) loop; if AI performs all four stages, the chances of IHL violations increase. The best solution to such a problem is that AI or AWS should perform the functions of observing and orienting, while the decision-making function is performed by a human controlling it, and AI executes the final act. Smart machines can make errors, and artificial intelligence can create unintended consequences of its actions in the pursuit of targets that seem harmless. One of the scenarios imagined for artificial intelligence is what we have already seen in movies such as The Terminator and in television programs, where an AI becomes highly intelligent and self-aware and decides that it no longer wants human control. Experts now say that current artificial-intelligence technology is not yet able to achieve this very serious milestone of self-awareness; however, future computers may make it possible.
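The human-in-the-loop division of the OODA loop described above (the machine observes and orients, a human decides, and the machine executes the act) can be sketched as follows. This is a minimal illustration under stated assumptions: every name, class, and data field here is hypothetical, and is not drawn from any real targeting or weapons system.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Machine-generated situational picture (the Observe + Orient stages)."""
    target_id: str
    is_combatant_likely: bool
    confidence: float  # the model's own estimate, 0.0-1.0

def observe_and_orient(raw_sensor_data: dict) -> Assessment:
    # Hypothetical placeholder: a real system would fuse sensor feeds here.
    return Assessment(
        target_id=raw_sensor_data["id"],
        is_combatant_likely=raw_sensor_data["signature"] == "combatant",
        confidence=raw_sensor_data.get("confidence", 0.0),
    )

def human_decide(assessment: Assessment, operator_approves: bool) -> bool:
    # The Decide stage is never delegated: only explicit human approval,
    # informed by the machine's assessment, authorizes action.
    return operator_approves and assessment.is_combatant_likely

def act(authorized: bool) -> str:
    # The Act stage: the machine executes only a human-authorized decision.
    return "engage" if authorized else "hold"

# Usage: machine observes/orients, human decides, machine acts.
a = observe_and_orient({"id": "T-1", "signature": "civilian", "confidence": 0.9})
print(act(human_decide(a, operator_approves=True)))  # prints "hold"
```

Note that the sketch is conservative by construction: even operator approval cannot authorize engagement when the machine's own assessment does not mark the target as a likely combatant, mirroring the article's point that distinction must gate any combat use.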

III. Technology, Dispute Resolution, and the Fourth Party

Online dispute resolution (ODR) is the use of information and communication technology to help people prevent and resolve disputes. ODR, like its offline sibling alternative dispute resolution (ADR), is characterized by its extrajudicial nature. In a sense, dispute resolution is defined by what it is not: it is not a legal process. Any resolution outside the courts is dispute resolution. If you and your counterparty decide to resolve your dispute by consulting tarot cards, that is alternative dispute resolution. If you decide to resolve your dispute with a game of checkers, that is also alternative dispute resolution. However, if you decide to resolve your dispute with a game of online checkers, that is online dispute resolution. Either way, in the dispute-resolution world, we paint with a pretty big palette. As ODR has developed over the past 20 years, a few core concepts have emerged. One of the most foundational is that of the “fourth party”. Originally introduced by Ethan Katsh and Janet Rifkin in their book Online Dispute Resolution, the fourth party describes technology as another party sitting at the table, alongside party one and party two (the disputants) and the third party (the neutral human, such as a mediator or arbitrator). You may be forgiven for picturing the fourth party as a friendly robot sitting next to you at the negotiating table and smiling patiently. Bear in mind, though, that this fourth party could just as easily be a black cylinder sitting on the table – à la Amazon Echo – or just software floating somewhere in the cloud. The form of the fourth party is irrelevant to the function it provides. The fourth party can play many different roles in a dispute. In most current ODR processes, the fourth party is largely administrative, handling tasks like case filing, reporting on statistics, sharing data, and facilitating communications. We ask our friendly fourth-party robot to take notes, or to dial in someone who could not join us at the table in person. But it is obvious to those of us in the ODR field that the fourth party is capable of much more.
While we humans pretty much work the way we always have, with our cognitive biases and attribution errors, computers are getting more powerful all the time. It is inevitable that at some point we will ask our fourth-party robot to help us resolve our issues, or maybe even just to handle them for us outright; the fourth party is just getting started. Artificial intelligence is a key dimension here. Artificially intelligent entities may be considered to operate lawfully as long as they are subject to human control; autonomous weapons systems may soon become unlawful unless they are controlled by humans. Whereas the law directly concerned by these manifestations of artificial intelligence (company law and the law of armed conflict, respectively) offers indications of how control issues can be solved – think of the laws regulating a company board’s control over management, or the chain of command in armed forces – international law has available a rich jurisprudence with respect to control, which has accumulated over the years in the most diverse situations. This experience should be drawn upon to shed some light on the puzzling control issues associated with artificial intelligence.

The most obvious source in this regard is the case law of the international criminal courts. Many of the cases before the International Criminal Tribunal for the Former Yugoslavia deal precisely with control: when and under which conditions is a commander high up in the hierarchy responsible for deeds on the ground? When are instructions sufficiently precise to warrant attribution? When are the tasks divided among several actors so strongly linked that they may be considered one entity, a joint criminal enterprise? The answers this case law gives to such questions may contain clues as to how control over artificial intelligence can be practically tackled. The fact that international criminal law involves humans exclusively, while with artificial intelligence one or more humans interact with a machine, should not be a reason not to draw on the tribunal’s record. Communication theory, at least, suggests that interaction with an artificially intelligent agent need not be fundamentally different from social intercourse among humans.

General international law may not speak to control as directly as international criminal law. However, the situations the World Court has addressed are even richer, more diverse, and thus more informative. Consider the following two examples from the time of the League of Nations; they are just two among dozens. (i) In 1931, Austria had concluded a treaty with Germany establishing a customs union. Soon thereafter, the Permanent Court of International Justice was asked to deliver an opinion on whether Austria had violated its international obligation, laid down in previous treaties (inter alia the peace treaty of Saint-Germain-en-Laye of 10 September 1919), not to alienate its independence. The Court concluded that Austria had violated this obligation. The opinion provides an illustration of what it means not to alienate one’s independence. Is this not also relevant for humans who are prone to alienate their independence and subject themselves to the ‘will’ of artificial intelligence? (ii) In 1930, the same Court indicated in an opinion that the Free City of Danzig was precluded by its status – secured inter alia in its constitution, which in turn was guaranteed by the League of Nations – from joining the International Labour Organization. The following quotations from the opinion are evidence of its relevance: “The result is that, as regards the foreign relations of the Free City, neither Poland nor the Free City are completely masters of the situation.” And: “[…S]o far as these rights involve a limitation on the independence of the Free City, they constitute organic limitations which are an essential feature of its political structure.” Some questions follow naturally: what does it mean to be the complete master of a situation involving artificial intelligence, and what are the organic limitations of our artificial intelligence’s structure? Overall, the opinion offers an illustration of how to structure a situation of competing interests and mutual dependency – which is just the point to be addressed with regard to artificial intelligence.

The flip side of retaining control over something is that some decisions must not be delegated. The persons controlling an artificial intelligence necessarily need to take some decisions themselves, or else they would not be in control. The questions to be answered then are: which decisions are these? What is it that cannot be delegated to a machine? With regard to autonomous weapons systems, the consensus seems to be that the decision to kill a human person in a concrete combat situation cannot be delegated to a machine. With regard to the artificially intelligent entities discussed above, the answer may be that a human person needs to be chargeable in case crimes are committed; criminal responsibility thus cannot be delegated. A search for other such limits of delegation in international law reveals again an interesting decision by the Permanent Court of International Justice, namely Consistency of Certain Danzig Legislative Decrees with the Constitution of the Free City. The legislative organs of Danzig had introduced a very general penal norm, authorizing the authorities to sanction individuals when an act deserved a penalty according to fundamental conceptions of penal law and sound popular feeling. The Court, in 1935, advised that such a norm, in moving beyond the nullum crimen principle, violated the fundamental rights of individuals and the rule of law.

From the perspective of artificial intelligence, the ruling suggests that the discretion delegable to an individual – which in the case of the Permanent Court was an individual judge, while in the case of artificial intelligence it is a synthetic individual – may be limited. The orders the human principal gives may have to be specific; the space the digital agent can fill may have to be limited.

This leads to the question: should we commit to the basic principle that artificial intelligence must be precluded from taking discretionary decisions?

Certain Danzig Legislative Decrees and the other cases discussed hint at the broader task ahead for lawyers. Like the foregoing international case law, the human rights case law of the European Court of Human Rights, which is much thicker, is likely full of the kinds of clues offered by Certain Danzig Legislative Decrees. In addition to the overarching perspective on delegation and control in general, guidance can be drawn from the case law with regard to each specific fundamental right. In the same way that Certain Danzig Legislative Decrees speaks to nullum crimen, thus enabling inferences for artificial intelligence, the case law of the European Court of Human Rights speaks to the right to life, the prohibition of torture, and so on.

In each case, implications for artificial intelligence are likely. Examining the case law of the European Court of Human Rights is a mammoth task (not to think of the case law of national courts!). However, in addition to exposing the limits of delegation and control, it will become evident where all legal systems are vulnerable to the artificial-intelligence revolution under way. Data protection is already further ahead in coming to grips with the consequences of artificial intelligence, because AI threatens digital privacy most directly. But the law governing companies, contracts, banking, agency – not to speak of public law in general – is nowhere near that far. It therefore seems urgent to start to look at the case law through the lens of artificial intelligence.

IV. Legal Regulation of AI-based Technologies

There is a big lag in the development of digital and other information technologies in Egypt in comparison to developed countries. According to the data of the Federal Program “Digital Economy,” the Russian Federation ranks 41st in readiness for the digital economy, a significant distance from the dozens of higher-ranked leading countries such as Singapore, Finland, Sweden, Norway, the United States of America, the Netherlands, Switzerland, Great Britain, Luxembourg and Japan. From the point of view of the economic and innovative results of using digital technologies, Russia ranks 38th, far behind leading countries such as Finland, Switzerland, Sweden, Israel, Singapore, the Netherlands, the United States of America, Norway, Luxembourg and Germany. In the view of many experts in the field, such a significant lag in the development of the digital economy is explained by gaps in the regulatory framework for the digital economy, an insufficiently favourable environment for doing business and stimulating innovation and, as a result, a low level of adoption of digital technologies by business structures and the lack of an adequate legal basis for the tools and mechanisms that allow attracting investment and innovators.

There are many reasons for this, and they are not, in principle, reasons related to the creation of law in a given country. Knowledge will always be the key to the development of new technologies. For example, in 2019 India introduced the subject “artificial intelligence” into the curriculum of its schools, an event which was picked up and reported by media around the world. It should be expected that such activities will also have an impact on the creation of law in this country.

A characteristic feature of the countries under study is that the first measures at the government level were undertaken relatively recently. The period from the end of 2017 to the start of 2018 was crucial in this area, when the first policies and reports in the field of artificial intelligence were created and special funds for research, education and training were established. Little time has passed, which is why a common feature of all of the BRICS countries is that none of them has special legal regulations in the field of AI. Yet we can cite a small number of examples of individual countries in which legislative initiatives are emerging, which may result in the development of new legal regulations at the end of 2020 or in early 2021. At the moment, the interest of countries in addressing the issues relating to AI is also heightened by the achievements of other countries and international organizations. National agencies indicate in their reports that their legal regulations and activities will consider, for example, the Organisation for Economic Co-operation and Development (OECD) principles in the field of AI. This also applies to declarations from some BRICS countries (e.g. the AI policies of Brazil); Brazil clearly indicates that it implements OECD recommendations. On the other hand, it can also be concluded from the actions of other countries that they are in line with
the OECD strategy in this area. AI and robotics are also proving to be valuable tools to assist caregivers, support elderly care and monitor patients’ conditions in real time, thus saving lives. AI has the potential to be a great tool to fight educational inequality and to create personalized and adaptable education programs that can help people to acquire new qualifications, skills and competences, according to individual ability to learn. Artificial intelligence already has an important impact on the EU economy and GDP growth. In addition, AI is being used to improve financial risk management and provides the tools to manufacture, with less waste, products tailored to our needs. Moreover, AI helps to detect fraud and cyber-security threats and enables law-enforcement agencies to fight crime more efficiently. Yet, as with any new technology, the use of AI brings risks. The citizens of the European Union fear being left powerless to defend their rights and safety when facing the information asymmetries of algorithmic decision-making, and entrepreneurs are concerned by legal uncertainty within the European Single Market. Artificial intelligence has the potential to do both material harm – for instance in relation to the safety and health of individuals, including loss of life and damage to property – and immaterial harm – such as loss of privacy, limitations on the right to freedom of expression, and harms to human rights, dignity and non-discrimination – and can relate to a wide variety of risks.

V.  Artificial Intelligence and Information Technology in Training and Professional Activities of Lawyers

Day after day, artificial intelligence approaches a state in which it observes and controls everything, in favour of maintaining States’ national security against terrorist attacks and protecting citizens from various crimes. Artificial intelligence today can identify the faces and identities of people in streets, stations and hotels through surveillance cameras, and can identify any telephone call that poses a threat or constitutes criminal talk between wanted persons. All this happens in record time, across the millions of citizens who move about daily and millions of telephone calls, and with very high efficiency; this serves government and security agencies. It is true that we have not reached total control of everything, but artificial intelligence aspires to an integrated system that watches: all e-mail; all the places where e-cards are used for purchases; all reservations in hotels, restaurants, airports and stations; all vehicle number plates in the streets; all audio calls; all conversations in public places captured by microphones planted there; all people present in a specific place such as streets, public squares, stadiums, mosques and elsewhere, via surveillance cameras; all hospitals and lists of people admitted with injuries or accidents at a specified time; all courts and the cases heard in them and the relationships of identified persons to them; and all press reports and social-networking publications. All these data are gathered and analysed to extract the relationship between people, or certain targeted persons, and a subject they share.

Facial features during meetings, phrases that can be captured, various correspondence and calls, the various places where persons are present, the various remittances they have made, their relationships to court cases or previous crimes, and files on existing or previous social problems: from all of this, the actions of specific persons can be inferred and predicted. After the intelligent system filters out the important security- and crime-related leads, research and investigation can be limited to specific persons, who are followed and monitored until they are apprehended before committing any crime; this gives us a proactive step against terrorism and crime.

The legal framework of arbitration does not, in itself, bar the use of legal technologies by arbitrators, parties, and their counsel. Arbitration is indeed contractually based, and arbitrators, with the consent of the parties, enjoy significant freedom in directing fact-finding and in case management. Article 19 of the United Nations Commission on International Trade Law (UNCITRAL) Model Law on International Commercial Arbitration thus provides that the parties are free to agree on the procedure to be followed, failing which the arbitrator will ‘conduct the arbitration in such manner as [she or he] considers appropriate’, including on the issues of the ‘admissibility, relevance, materiality and weight of any evidence’. Furthermore, arbitrators enjoy considerable freedom in their role as fact-finders. For instance, Article 25 of the ICC Arbitration Rules gives arbitrators recourse to a broad range of means for establishing the facts of the case, neither specifying nor preferring any single method: ‘The arbitral tribunal shall proceed within as short a time as possible to establish the facts of the case by all appropriate means.’

Despite the lack of legal barriers, resistance to technology and artificial intelligence persists, flourishing on concerns that such technologies will usurp arbitrators’roles. It is, however, improbable that technology could completely replace arbitrators. It bears emphasizing that almost all existing national, international, and institutional laws and rules envisage that arbitrators must be human. Some jurisdictions even explicitly provide that such a role could only ever be entrusted to physical persons Nevertheless, the UNCITRAL Model Law on International Arbitration and its preparation works do not include a specific definition of arbitrators. In reliance on this legal loophole, some commentators have advanced
the possibility of appointing computers and programs as arbitrators

However, quite apart from the liability and disclosure issues that such an appointment would raise, an automated agent acting as arbitrator would lack the key human characteristics of emotion, empathy, morality, the ability to explain decisions, and the ability to decide ex aequo et bono. Even though it is not in principle impossible to entrust part of an arbitrator’s mission to automated agents where the logical assessment involved in fact-finding is concerned, the ‘sociological print’ is a key component of the mission that only humans can perform.10 It is arguably required that arbitrators possess human characteristics such as capacity, impartiality, and independence. It is therefore generally accepted that arbitration cannot be fully automated by artificial intelligence.

The use of digital technologies and artificial intelligence in arbitration nonetheless poses various challenges arising from the characteristics of arbitration, including confidentiality, due process, the arbitrator’s role, and the potential for decreased flexibility. Confidentiality raises two distinct issues at different points in the arbitral process: first, access to precedent (arbitral awards or procedural orders) during the preparation phase, notably in counsel’s preparation of written pleadings; and second, the external input necessary to operate technologies during the proceedings.

The first aspect is most salient in commercial arbitration, as commercial arbitral awards are usually confidential, unlike in most investment treaty arbitrations. One way to overcome this challenge, however, is to access information directly via arbitral institutions. For instance, the legal tech firm Dispute Resolution Data has built its case law database with the cooperation of twenty arbitral institutions. To avoid any confidentiality issues, the arbitral institutions upload the data themselves, ensuring that the names of the parties and other sensitive details remain confidential. The second aspect is that any recourse to digital technologies or artificial intelligence involves some external input, meaning that, ultimately, humans external to the arbitration proceedings will program and handle these technologies.
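The redaction step the institutions perform before uploading awards can be sketched as follows. The function name and the placeholder format are illustrative assumptions, not a description of Dispute Resolution Data’s actual process.

```python
import re

def redact_award(text: str, parties: list) -> str:
    """Replace each party name with a neutral placeholder before the
    award is shared with an external database, so that sensitive
    details never leave the institution in identifiable form."""
    for i, name in enumerate(parties, start=1):
        text = re.sub(re.escape(name), f"[Party {i}]", text,
                      flags=re.IGNORECASE)
    return text

award = "Claimant Acme GmbH requests that Respondent Beta Ltd pay damages."
print(redact_award(award, ["Acme GmbH", "Beta Ltd"]))
# Claimant [Party 1] requests that Respondent [Party 2] pay damages.
```

Because the institution runs this step itself, the external legal-tech provider only ever receives the anonymized text, which is the point the paragraph above makes.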

  1. Hangzhou Internet Court

The Hangzhou Internet Court in China seeks to move the entire litigation process online, including prosecution, filing, proof, court hearings, and rulings. The online process brings disputants from across the country together to increase efficiency and save judicial resources. The court has a broad reach, covering copyright, contract disputes related to e-commerce, product liability, internet service provider disputes, conflicts over loans obtained online, and domain name disputes, and experts have viewed it as one of the most ambitious of its kind.

The court’s process begins when the plaintiff registers on the site and is verified as a legitimate claimant. The plaintiff fills out an online form describing the conflict and allows the Internet Court to retrieve the case information. Each party obtains a “My Litigation” tab and enters a “query code” provided in the notice in order to review the complaint. Within fifteen days of filing the case, a mediator contacts both parties and conducts pre-trial mediation via the internet, phone, or videoconference. If mediation fails, the lawsuit goes to the court’s Case Filing Division, where the parties can track the case and gather information about similar cases in order to estimate likely outcomes, which may help them reach settlements before litigation.

As of February 2018, the experience in the four Hangzhou courts hearing online cases had been “encouraging” for advancing efficiency. During its first year, the court received filings for over 6,000 cases, of which about two-thirds were resolved or dismissed through online means. Participation is voluntary, and defendants can demand that the case be heard offline. Typical cases involved purchases from large e-commerce companies based in Hangzhou, including Alibaba, Taobao, and NetEase. This has caused some concern regarding power imbalances, as well as questions regarding the influence that these e-commerce giants may have in the court itself. Nonetheless, the Hangzhou Internet Court has been so successful in creating efficiencies that China plans to set up internet courts in Beijing and Guangzhou, according to a statement from China’s Supreme People’s Court (SPC).

Furthermore, the Hangzhou court is setting trends in the broader consideration of technology’s role in litigation. It recently became the country’s first court to accept legally valid electronic evidence secured with blockchain technology. The plaintiff in an infringement case conducted an automatic capture of the infringing webpages and their source code through a third-party platform and uploaded them, together with the logs, to the Truth blockchain for document verification. The court accepted this means of submitting evidence after finding that the blockchain technology complied with the relevant standards for ensuring the reliability of electronic data. Chinese courts require strict verification procedures, and this case established that blockchain can be used as a legal method to determine the authenticity of an item of evidence, similar to the traditional notarization service commonly used in China.
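The reliability check behind blockchain-anchored evidence can be illustrated with a minimal digest-anchoring sketch. Here a plain Python list stands in for the third-party blockchain platform, and all names are assumptions for the sketch, not the platform actually used in the Hangzhou case.

```python
import hashlib
import time

def fingerprint(evidence_bytes: bytes) -> str:
    """SHA-256 digest of the captured webpage or source code."""
    return hashlib.sha256(evidence_bytes).hexdigest()

def anchor(evidence_bytes: bytes, ledger: list) -> dict:
    """Record the digest (not the evidence itself) with a timestamp.
    On a real blockchain this entry would be immutable."""
    entry = {"digest": fingerprint(evidence_bytes), "ts": time.time()}
    ledger.append(entry)
    return entry

def verify(evidence_bytes: bytes, ledger: list) -> bool:
    """The court's check: the submitted file is authentic only if its
    digest matches an entry anchored at capture time."""
    return any(e["digest"] == fingerprint(evidence_bytes) for e in ledger)

ledger = []
page = b"<html>infringing content</html>"
anchor(page, ledger)
print(verify(page, ledger))         # True: untampered copy matches
print(verify(page + b"x", ledger))  # False: any alteration is detected
```

The design mirrors why the court could treat the anchor like a notarization: only the digest needs to be trusted, and any later tampering with the file makes verification fail.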

VII. The role of artificial intelligence to protect civilians in war

In the case of an armed conflict not of an international character occurring in the territory of one of the High Contracting Parties, each party to the conflict is bound to apply, as a minimum, the following provisions:

1- Persons taking no active part in the hostilities, including members of the armed forces who have laid down their arms and those placed hors de combat by sickness, wounds, detention, or any other cause, shall in all circumstances be treated humanely, without any adverse distinction founded on race, colour, religion or faith, sex, birth or wealth, or any other similar criteria.

To this end, the following acts are and shall remain prohibited at any time and in any place whatsoever with respect to the above-mentioned persons:

(A) violence to life and person, in particular murder of all kinds, mutilation, cruel treatment and torture;

(B) taking of hostages;

(C) outrages upon personal dignity, in particular humiliating and degrading treatment;

(D) the passing of sentences and the carrying out of executions without previous judgment pronounced by a regularly constituted court affording all the judicial guarantees recognized as indispensable by civilized peoples.

2- The wounded and sick shall be collected and cared for.

An impartial humanitarian body, such as the International Committee of the Red Cross, may offer its services to the parties to the conflict. The parties to the conflict should further endeavour to bring into force, by means of special agreements, all or part of the other provisions of this Convention.

The application of the preceding provisions does not affect the legal status of the parties to the conflict.

Article 36 of Additional Protocol I (AP-I) to the Geneva Conventions obliges every State party that studies, develops, acquires, or adopts a new weapon or technology of warfare to determine whether its employment would meet the above-mentioned criteria. An AI weapon must not cause superfluous injury or unnecessary suffering, and its effects must not be indiscriminate or disproportionate. The use of AI should always answer to military necessity and military advantage, and it should also adhere to all obligations the State has undertaken under the various international treaties and customary law. The AI should further be capable of recognizing hostile acts and persons hors de combat, dealing with surrender, and seizing a person, weapon, or property. Though high-grade machine learning can make these things possible, the chance of errors caused by the black-box syndrome is high and cannot be ruled out, and such errors may not comply with the norms of IHL. Under the weapon-review procedure provided by Article 36 of the Additional Protocol, the obligation of review rests on the State wishing to introduce such a weapon into war, which must create an internal committee to review the weapon in accordance with Additional Protocol I; yet only a few countries have a well-developed weapon-review procedure.

The question and controversy remain, for some, over what happens when artificial intelligence or a machine commits a mistake: will the responsibility fall on the manufacturer or on the user of the machine, who is the watchdog over its operation and is treated as the guardian of the thing? The researcher believes that, in this context, legal responsibility should rest on the user of the machine, because the user is the actual controller and exercises guardianship over a thing under his actual power. AI weapons must not cause relentless destruction to the environment, and the principles of just war and of proportionality must be observed while programming these weapons. In addition, artificial intelligence weapons must be compatible with Article 51 of Additional Protocol I to the Geneva Conventions and, more generally, with the rules of international humanitarian law.

– The necessity of enacting a law regulating the work of artificial intelligence: we can easily imagine a discussion about the various future legal consequences if we do not today pass a law defining the role of artificial intelligence.

– The law has a long history of dealing with oversight and delegation, and the case law of international courts abounds with such issues. The case law of the International Court of Justice, the international criminal courts, and the regional human rights courts is likely to be littered with these cases, so there should be decisions governing the work of AI in such matters.

– Artificial intelligence should, at all times, comply with IHL, not only because of the obligations under the Geneva Conventions but also as a matter of customary international law. The customary international law of warfare places a negative duty on any State using AI during war to refrain from violating IHL, including the direct responsibility not to support or assist the commission of a contravention. The best way to conduct a weapon review is to bring all stakeholders, including military lawyers, AI developers, testers, and end users, together during the testing and evaluation stage.


1- K. Binsted, Machine Humour: An Implemented Model of Puns, Ph.D. thesis, University of Edinburgh, 1996.

2- M.A. Boden, The Creative Mind: Myths and Mechanisms, Basic Books, New York, 1990.

3- Margaret A. Boden, Creativity and Artificial Intelligence, Artificial Intelligence 103 (1998) 347–356.


4- Noam Lubell, Armed Conflict, Oxford University Press, published in association with the Royal Institute of International Affairs (Chatham House), 2016.

5- The black box is a problem in AI devices: algorithm-based probabilistic reasoning rests on highly complicated statistical operations or geometric relationships that humans cannot visualize or predict. This creates a ‘black box’ problem in which humans never come to know how these machines reached a decision or which algorithms they applied before reaching it.

6- Challenges for International Humanitarian Law.

7- Gauthier Vannieuwenhuyse, Arbitration and New Technologies: Mutual Benefits.

8- See Andreas Respondek, Five Proposals to Further Increase the Efficiency of International Arbitration Proceedings, 31 J Int’l Arb 507 (2014); A. V. Schläpfer & M. Paralika, Striking the Right Balance: The Roles of Arbitral Institutions, Parties and Tribunals in Achieving Efficiency in International Arbitration, 2 BCDR Int’l Arb Rev 329 (2015); J. Kirby, Efficiency in International Arbitration: Whose Duty Is It?, 32 J Int’l Arb 689 (2015).

9- Ethan Katsh, A Few Thoughts About the Present and Some Speculation About the Future.

10- Ethan Katsh & Janet Rifkin, Online Dispute Resolution (Jossey-Bass, 2001).

11- Dave Orr & Colin Rule, Artificial Intelligence and the Future of Online Dispute Resolution.
12- See id. at 14. For example, in one case, a Chinese plaintiff bought a collectible battery-powered bank on Taobao, a popular shopping site, and tried to return it because the product was a counterfeit. Id. He then sued Taobao, claiming breach of contract for allowing a seller to market counterfeit goods, but the court dismissed the claim. Id.

13- Amy J. Schmitz, Expanding Access to Remedies through E-Court Initiatives, Volume 67, January 2019, Legal Studies Research Paper Series, Research Paper No. 2019-07.

14- Darin Thompson, Creating New Pathways to Justice Using Simple Artificial Intelligence and Online Dispute Resolution, Osgoode Hall Law School of York University, Osgoode Digital Commons: Research Papers, Working Papers, Conference Papers.

15- Order of the Government of the Russian Federation No. 1632

16- Peter J. Bentley et al., Should We Fear Artificial Intelligence? In-Depth Analysis, European Union (March 2018) (Dec. 23, 2020), available at
17- Thomas Burri, International Law and Artificial Intelligence, 60(1) Ger. Yearb. Int’l L. 91 (2019).

18- Damian Cyman, Gdansk University (Gdansk, Poland), … and the European Union, BRICS Law Journal, Volume VIII (2021), Issue 1.

19- Permanent Court of International Justice (PCIJ), Customs Regime Between Germany and Austria, Advisory Opinion of 5 September 1931, Series A/B, No. 41.

20- Thomas Burri, International Law and Artificial Intelligence, electronic copy available at:
