Artificial intelligence (AI) is the effort to analyze human thinking methods and to develop artificial counterparts to them. Although from one perspective it may look like an attempt to make a programmed computer think, these definitions are changing rapidly, and new trends are emerging toward a concept of artificial intelligence that can learn and develop independently of human intelligence. This idea was anticipated by Karel Čapek, one of the pioneering writers of modern science fiction and an influence on Isaac Asimov: in works such as R.U.R. (1920), he used robots possessing artificial intelligence to explore common social problems of humanity and predicted that artificial intelligence could develop independently of human intelligence.
Definition
According to an idealized approach, artificial intelligence is an artificial operating system expected to exhibit high-level cognitive functions or autonomous behaviors characteristic of human intelligence, such as perception, learning, associating multiple concepts, thinking, reasoning, problem solving, communicating, drawing inferences, and making decisions. Such a system should also be able to produce responses from its thoughts (actuator AI) and express those responses physically.
History
The history of the concept of "artificial intelligence" is as old as modern computer science itself. The person who posed the question "Can machines think?" and opened the debate on machine intelligence was Alan Mathison Turing. During World War II, the electromechanical devices built for cryptanalysis laid the foundations of both computer science and the concept of artificial intelligence. Turing was among the most famous mathematicians working to break the encryption of the Nazis' Enigma machine. The codebreaking effort at Bletchley Park in England, shaped by Turing's principles, and early machines such as the Bombe, Heath Robinson, and the Colossus computers gave rise to the concept of machine intelligence, grounded in data processing based on Boolean algebra. Later, however, modern computers spread mainly into everyday problem-solving applications such as expert systems, while artificial intelligence research remained the focus of a narrower community.
Today, the Turing Test, named after Alan Turing, is administered in the United States as the Loebner Prize, which awards software exhibiting machine intelligence. A panel of judges converses both with human subjects and with conversational software; after a set period, the judges are asked to determine which participant is human and which is machine intelligence. Interestingly, in some of the tests conducted, machine intelligence was judged to be human while real humans were judged to be machines.
One of the most famous conversational systems to win the Loebner Prize is A.L.I.C.E., developed by Dr. Richard Wallace, who earned his doctorate at Carnegie Mellon University. These and similar programs have drawn criticism because the test's criteria are based on conversation, so the winning programs are mainly conversational systems (chatbots).
Research on artificial intelligence is also carried out in Turkey, both within universities and independently, in fields such as natural language processing, expert systems, and artificial neural networks. One example is D.U.Y.G.U. - Language Space Artificial Realizer.
Development Process
Early research and artificial neural networks
One of the first studies on artificial intelligence in the idealized sense was conducted by McCulloch and Pitts. Their computational model, built from artificial neurons, drew on propositional logic, physiology, and Turing's theory of computation. They showed that any computable function could be computed by networks of artificial neurons and that logical operations such as AND and OR could be performed, and they suggested that such networks could acquire the ability to learn if properly defined. When Hebb proposed a simple rule for changing the strengths of the connections between neurons, it also became possible to build artificial neural networks that learn.
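As a minimal sketch of the idea, a McCulloch-Pitts-style neuron outputs 1 when the weighted sum of its binary inputs reaches a threshold; with suitable thresholds the same unit realizes AND or OR. (The weights and thresholds below are illustrative choices for this example, not values from the original paper.)

```python
# Minimal McCulloch-Pitts-style threshold neuron.
# The neuron fires (outputs 1) when the weighted sum of its
# binary inputs reaches the threshold.

def mcp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted input sum reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND: both inputs must be active to reach a threshold of 2.
# OR: a single active input already reaches a threshold of 1.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "AND:", mcp_neuron(x, (1, 1), 2), "OR:", mcp_neuron(x, (1, 1), 1))
```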
In the 1950s, Shannon and Turing were writing chess programs for computers. The first computer based on artificial neural networks, SNARC, was built by Minsky and Edmonds at MIT in 1951. McCarthy, Minsky, Shannon, and Rochester, continuing their studies at Princeton University, organized a two-month open workshop at Dartmouth in 1956. Although many of the foundations of the field were laid at this meeting, its most important outcome was the adoption of the name "Artificial Intelligence", proposed by McCarthy. The first theorem-proving program, Logic Theorist, by Newell and Simon, was also presented there.
New Approaches
Later, Newell and Simon developed the General Problem Solver (GPS), the first program designed according to the approach of thinking like a human. Simon subsequently proposed the physical symbol system hypothesis, which became the starting point for those aiming to create intelligent systems independent of humans. With this formulation it became clear that two distinct currents had emerged in approaches to artificial intelligence: symbolic artificial intelligence and cybernetic artificial intelligence.
Approaches and Criticisms
Symbolic artificial intelligence
After Simon's symbolic approach, logic-based work became dominant, and artificial problems and worlds were used to demonstrate what the programs could achieve. These problem domains were later criticized as toy worlds that did not represent real life in any way, and it was argued that artificial intelligence could succeed only in such settings and could not scale to solve real-world problems.
ELIZA, developed by Weizenbaum and one of the most famous programs of this period, appeared to converse with its users, but in fact it only applied simple transformations to the user's own sentences. Early machine translation work took similar approaches, and when the resulting translations turned out to be absurd, support for the work was withdrawn. These shortcomings stemmed from the fact that the semantic processes of the human brain had not been adequately studied.
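As an illustration, here is a minimal ELIZA-style exchange in Python. This is a sketch of the general technique (pattern matching plus pronoun reflection), not Weizenbaum's original DOCTOR script; all patterns and responses below are invented for the example.

```python
import re

# A minimal ELIZA-style sketch: recognize a few sentence patterns
# and reflect the user's own words back as a question.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no pattern matches

print(respond("I feel tired of my work"))  # Why do you feel tired of your work?
```

The program has no understanding of what "tired" means; it merely rearranges the user's words, which is exactly the limitation the critics pointed out.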
Cybernetic artificial intelligence
The situation was the same on the cybernetic front, which included work on artificial neural networks. When important deficiencies were identified in the basic structures used to simulate intelligent behavior, many researchers abandoned their work. The most basic example is Minsky and Papert's 1969 book Perceptrons, which showed that a single-layer perceptron cannot solve some simple problems, such as the XOR function, and conjectured that multilayer perceptrons would suffer from the same sterility.
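The limitation is easy to demonstrate. The sketch below (with an illustrative learning rate and epoch count of my own choosing) trains a single-layer perceptron with the classic perceptron learning rule: it learns the linearly separable AND function perfectly, but no choice of weights lets it classify all four XOR cases correctly.

```python
# A sketch of the classic perceptron learning rule, illustrating the
# limitation Minsky and Papert analyzed: a single-layer perceptron
# learns linearly separable functions such as AND, but cannot learn XOR.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Return (weights, bias) after running the perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    hits = sum(
        (1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0) == t
        for (x1, x2), t in samples
    )
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    w, b = train_perceptron(data)
    print(name, "accuracy:", accuracy(data, w, b))  # AND: 1.0; XOR never exceeds 0.75
```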
The main reason for the failure of the cybernetic movement was that although a single-layer network could carry out the task of perception, the resulting percepts could not be turned into judgments or connected with other concepts. This, in turn, made it impossible to simulate semantic processes.
Another important problem was that the networks had to grow disproportionately in size in order to increase the system's functionality, making the system ever more complex and difficult to manage. This led researchers to develop new learning models and structures in pursuit of more realistic and practical systems.
Expert Systems
In the 1970s, work turned to systems that could give advice within a specific subject area. The expectation was that people without expert knowledge in a particular field, for example in medical diagnosis or finance, would be able to get expert help from such systems. In 1974, the European Community supported a project called EUROPA, which aimed to simulate the decision-making of expert economists; the resulting system could explain the causal relationships in economic models and advise decision-makers.
In the 1980s, XCON, developed by Digital Equipment Corporation, became the first commercially used expert system. XCON, which could work out the configuration of VAX computer systems, quickly paid for itself by reducing the need for human expert support.
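The following sketch illustrates the forward-chaining style of inference such rule-based systems used. The rules and facts are hypothetical stand-ins invented for this example, not DEC's actual configuration knowledge base, and the engine is deliberately minimal.

```python
# A minimal forward-chaining rule engine in the spirit of expert systems
# such as XCON. Rules fire when all their conditions are known facts,
# adding new facts until nothing more can be concluded.

RULES = [
    # (name, conditions that must all hold, fact to conclude)
    ("needs_expansion", {"disks > 2"}, "needs expansion cabinet"),
    ("big_psu", {"needs expansion cabinet"}, "needs high-capacity power supply"),
]

def forward_chain(facts, rules):
    """Repeatedly fire applicable rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                print(f"rule {name!r} fired -> {conclusion}")
                facts.add(conclusion)
                changed = True
    return facts

forward_chain({"disks > 2"}, RULES)
```

Real systems like XCON ran thousands of such rules; the appeal was that the knowledge lived in the rule base, where domain experts could extend it without reprogramming the engine.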