Scientific News | Study reveals whether the military can rely on AI for automation

Atlanta [US], Jun 18 (ANI): The military cannot rely on artificial intelligence (AI) for its strategy and decision-making, according to a recent study by the Georgia Institute of Technology, which argues that only limited tasks can be automated and stresses the importance of human discernment.
The findings of the study were published in International Security.
“All hard problems in AI are really judgment and data problems, and what’s interesting about that is that when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war,” said Jon Lindsay, associate professor at the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs.
“You need human sense-making to form moral, ethical and intellectual decisions in an incredibly confusing, tense and frightening situation.”
AI decision-making rests on four key elements: data about a situation, the interpretation of that data (or prediction), the determination of the best course of action in accordance with goals and values (or judgment), and action. Advances in machine learning have made prediction cheaper, making data and judgment even more valuable.
Although AI can automate everything from trade to transit, judgment is where humans need to step in, Lindsay and Professor Avi Goldfarb from the University of Toronto wrote in the article “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War”, published in International Security.
Many policymakers believe that human soldiers could be replaced by automated systems, which would ideally make the military less dependent on human labor and more effective on the battlefield.
This is called the substitution theory of AI, but Lindsay and Goldfarb argue that AI should not be seen as a substitute, but rather as a complement to existing human strategy.
“Machines are good for prediction, but they depend on data and judgment, and the hardest problems in war are information and strategy,” Lindsay said. “The conditions that make AI work in commerce are the conditions that are hardest to meet in a military environment because of its unpredictability.”
One example highlighted by Lindsay and Goldfarb is the mining company Rio Tinto, which uses self-driving trucks to transport materials, reducing costs and risks to human drivers. The setting offers abundant, predictable and unbiased data: traffic patterns and maps that require little human intervention unless there are road closures or obstacles.
Warfare, however, generally lacks abundant unbiased data, and judgments about goals and values are inherently contentious. That does not make military AI impossible, the researchers say, but it would be best employed in bureaucratically stabilized environments on a task-by-task basis.
“All the excitement and fear is about killer robots and deadly vehicles, but the worst case for military AI in practice will be the classically militaristic problems where you really depend on creativity and interpretation. What we should be looking at instead are personnel systems, administration, logistics and repairs,” Lindsay said.
According to the researchers, the use of AI also has consequences for the military and its adversaries. If humans are central to deciding when to use AI in warfare, the structure and hierarchies of military leadership could change depending on who is responsible for designing and cleaning data systems and making policy decisions. It also means that adversaries will seek to compromise both data and judgment, since both largely shape the trajectory of war.
Competing against AI may lead opponents to manipulate or disrupt data in order to make sound judgment even harder. In turn, human intervention will become even more necessary. Yet this is only the beginning of the controversy and the innovations to come.
“If AI automates prediction, that makes judgment and data really matter,” Lindsay said. “We have already automated many military actions with mechanized forces and precision weapons, then we automated data collection with satellites and intelligence sensors, and now we are automating prediction with AI. So when are we going to automate judgment, or are there components of judgment that cannot be automated?”
Until then, however, tactical and strategic decision-making by humans continues to be the most important aspect of warfare. (ANI)
(This is an unedited, auto-generated story from a syndicated newsfeed; LatestLY staff may not have edited or modified the body of the content.)