Artificial intelligence has become an umbrella term for applications that perform complex tasks which once required human input, such as communicating with customers online or playing chess. The term is often used interchangeably with its subfields, which include machine learning and deep learning.
There are differences, however. Machine learning, for example, focuses on building systems that learn or improve their performance based on the data they consume. It is important to note that although all machine learning is AI, not all AI is machine learning.
To capture the full value of AI, many companies make significant investments in data science teams. Data science, a multidisciplinary field that uses scientific methods to extract value from data, combines skills from areas such as statistics and computer science with domain knowledge to analyze data collected from multiple sources.
What is Artificial intelligence?
In the simplest terms, AI refers to systems or devices that mimic human intelligence to perform tasks and that can improve themselves based on the information they collect. AI research focuses on the following components of intelligence:
1. Learning in Artificial intelligence
There are many different forms of learning that apply to AI, but the simplest is learning by trial and error. For example, a simple computer program might attempt to solve a chess problem by trying random moves until it hits upon a solution. The program can then store the solution along with the position, so that the next time it encounters the same position, it recalls the solution.
This simple memorizing of individual items and procedures, called rote learning, is relatively easy to implement on a computer. More challenging is the problem of achieving what is known as generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote cannot produce the past tense of a word such as jump unless it has previously seen jumped, whereas a program that can generalize is able to learn the "add -ed" rule and so form the past tense of jump from its experience with similar verbs.
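The contrast between rote learning and generalization can be sketched in a few lines of code. This is an illustrative toy, not a real learning system; the verb list and the "add -ed" rule are assumptions chosen to match the past-tense example above.

```python
# Rote learning: store each (verb, past tense) pair seen so far.
rote_memory = {"walk": "walked", "play": "played"}

def rote_past_tense(verb):
    # Fails on anything it has not literally seen before.
    return rote_memory.get(verb)

def generalizing_past_tense(verb):
    # Generalization: after seeing several "-ed" examples,
    # apply the inferred "add -ed" rule to unseen verbs.
    return rote_memory.get(verb, verb + "ed")

print(rote_past_tense("jump"))          # None: never saw "jumped"
print(generalizing_past_tense("jump"))  # "jumped", via the rule
```

The rote learner can only replay what it has stored, while the generalizing learner handles verbs it has never encountered.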
2. Reasoning in Artificial intelligence
Reasoning means drawing inferences appropriate to the situation.
AI software and algorithms have tremendous storage capacity, much greater than human memory, and can retrieve and process information at a speed humans cannot match. This is the first kind of intelligence, and it is really quite simple compared with the second kind: the ability to analyze and reason by analogy.
Artificial intelligence can also produce solutions through abstract reasoning after intensive training, but only in the areas where it has been trained. Real intelligence is the ability to generalize, something AI still lacks and struggles with greatly.
There have nevertheless been remarkable achievements in programming computers to reason, especially to reason deductively. However, true reasoning involves more than just drawing inferences; it involves drawing the inferences relevant to the solution of a particular task or situation. This remains one of the hardest problems facing AI.
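Deductive reasoning of the kind computers handle well can be sketched as forward chaining: repeatedly applying if-then rules to known facts until no new conclusions appear. The facts and rules below are illustrative, not drawn from any real reasoning system.

```python
# Forward chaining: apply if-then rules to known facts until
# nothing new can be derived. Each rule is (premises, conclusion).
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # modus ponens: premises hold, so add conclusion
            changed = True

print(sorted(facts))
```

The loop mechanically derives every consequence of the rules, which is exactly the kind of inference that is easy for machines; deciding which inferences are relevant to a task is the part that remains hard.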
3. Artificial intelligence and problem solving
Most people have major concerns about AI: what the term really means, its impact on jobs, and whether robots will replace humans in the labour market.
Despite these concerns, current research projects show that AI can also be used for the common good.
In this context, Robohub published a report on global problems that AI might help solve:
Advance research and medicine
One of the biggest benefits of artificial intelligence is its ability to search through large amounts of data in record time, helping researchers identify the areas on which to focus their work.
For example, new leads on ALS were discovered through a partnership between the Barrow Neurological Institute and IBM Watson Health, which reviewed thousands of research papers and identified new genes associated with the disease.
In health care, artificial intelligence is also expected to predict the outcomes of drug treatments. Today, for example, all cancer patients may be given the same drug and then followed up to monitor its effectiveness, whereas AI can use data to predict how effective a specific drug will be for a specific patient, enabling personalized treatment and saving time and money.
Make driving safer
Despite the self-driving car accidents that made headlines this year, this area of artificial intelligence could significantly reduce road deaths and injuries.
According to a report by Stanford University, self-driving cars will not only reduce traffic-related deaths and injuries but could also bring about lifestyle changes: people will have more time for work and leisure while riding, and housing choices will shift, since the availability and comfort of self-driving cars will affect where people choose to live.
Change how we learn
People learn in different ways and at different speeds. In the future, AI could be used to teach individuals in a personalized way, tailored to each learner's level of comprehension. No educational system in the world can provide a teacher for every child, but through AI an automated tutor that is as human-like as possible could deliver a personal learning experience.
Track how much energy is consumed
Artificial intelligence can help people understand their energy consumption, and that is already starting to happen. Google and other tech giants run huge data centers that require enormous amounts of energy to power the servers and keep them cool, and AI is being used to optimize that consumption.
Protect wildlife
AI’s ability to analyze large amounts of data could represent a shift in wildlife conservation.
For example, by monitoring animals’ movements, researchers can see where they go and which natural habitats must be protected. One study used computational methods to determine where wildlife corridors should best be built for wolverines and grizzly bears in Montana.
4. Perception in Artificial intelligence
Artificial intelligence refers to computer systems that mimic human behaviour, but that does not mean every piece of software that runs an algorithm and carries out specific tasks counts as artificial intelligence. For a machine to deserve the term, it must be able to learn, to collect and analyze data, and to make decisions based on that analysis independently, in a way that mimics human thinking. This implies three main characteristics:
- Ability to learn, i.e. acquire information and develop rules for using such information.
- The ability to collect and analyze such data and information and to create relationships between them.
- Making decisions based on the information analysis process, not just an algorithm that achieves a particular goal.
5. Artificial intelligence programming languages
The idea of artificial intelligence is based on simulating human beings and their functions, such as linking pieces of information together in order to understand and solve problems.
Many people want to get to know this sophisticated technology, and to learn the basics of AI programming we must first study its various languages; this is the first stage of working with artificial intelligence.
There are many programming languages, and those used for AI are commonly divided into five: Python, R, Lisp, Prolog, and PHP.
Methods and goals in Artificial intelligence
Symbolic and connectionist approaches
AI research follows two distinct, and sometimes competing, approaches: the symbolic approach and the connectionist approach. The symbolic (or "top-down") approach seeks to replicate intelligence by analyzing cognition independently of the brain's biological structure, in terms of the processing of symbols, hence the "symbolic" label. The connectionist (or "bottom-up") approach involves creating artificial neural networks that imitate the brain's structure, hence the "connectionist" label.
In 1957, two leading proponents of symbolic AI, Allen Newell, a researcher at the RAND Corporation in Santa Monica, California, and Herbert Simon, a psychologist and computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, summed up the top-down approach in what they called the physical symbol system hypothesis. This hypothesis states that processing structures of symbols is sufficient, in principle, to produce artificial intelligence in a digital computer, and, moreover, that human intelligence is the result of the same kind of symbolic manipulation.
Symbolic techniques work in simplified settings but typically fail when confronted with the real world. At the same time, bottom-up researchers have been unable to replicate the nervous systems of even the simplest organisms. Caenorhabditis elegans, a much-studied worm, has about 300 neurons whose pattern of interconnections is known. Yet connectionist models have failed to mimic even this worm. The artificial neurons of connectionist theory are clearly gross oversimplifications of the real thing.
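The artificial neuron at the heart of the connectionist approach can be sketched as a perceptron, a single unit that weighs its inputs and learns by nudging those weights after each mistake. The task (learning logical AND) and every parameter value below are illustrative choices, not any particular historical model.

```python
# A single artificial neuron (perceptron) learning the AND function.
def step(x):
    return 1 if x >= 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # input weights
b = 0.0          # bias
rate = 0.1       # learning rate

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in samples:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out               # perceptron update rule
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b += rate * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

A single unit like this can only separate linearly separable cases, which hints at why networks of such neurons, and the connectionist program as a whole, remain drastic simplifications of real nervous systems.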
Strong AI, applied AI, and cognitive simulation
Through the approaches above, AI research tries to achieve one of three goals: strong AI, applied AI, or cognitive simulation.
Strong AI aims to build machines that think. (The philosopher John Searle of the University of California, Berkeley, introduced the term "strong AI" for this category of research in 1980.)
The ultimate ambition of strong AI is to produce a machine whose general intellectual ability is indistinguishable from that of a human being. As described in the section on early milestones below, this goal generated great interest in the 1950s and 1960s, but that optimism has since given way to an appreciation of the extreme difficulties involved. To date, progress has been limited.
Applied artificial intelligence, also known as advanced information processing, aims to produce commercially viable “smart” systems – for example, “expert” medical diagnostic systems and stock trading systems.
Cognitive simulation uses computers to test theories about how the human mind works, for example, theories about how we recognize faces or recall memories. Cognitive simulation is already a powerful tool in both neuroscience and cognitive psychology.
Alan Turing and the beginning of Artificial intelligence
The earliest significant work in artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. In 1935, Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing's stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing's conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.
During World War II, Turing was a leading cryptanalyst at the Government Code and Cypher School in Bletchley Park, Buckinghamshire, England. He could not turn to the project of building an electronic stored-program computing machine until the end of the war in Europe in 1945, but throughout the war years he gave considerable thought to machine intelligence.
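The stored-program idea can be illustrated with a tiny Turing machine interpreter. The machine below, which increments a binary number, and its rule table are invented for illustration; the point is that the "program" is just a table of symbols that the machine consults, and so could itself be stored on the tape.

```python
from collections import defaultdict

def run(rules, tape, state="start"):
    """rules maps (state, symbol) -> (new_state, symbol_to_write, move)."""
    t = defaultdict(lambda: "_", enumerate(tape))  # blank cells read "_"
    pos = 0
    while state != "halt":
        state, t[pos], move = rules[(state, t[pos])]
        pos += 1 if move == "R" else -1
    return "".join(t[i] for i in sorted(t)).strip("_")

# Increment a binary number: scan to the rightmost digit, then
# carry 1s to 0s from the right until a 0 (or blank) absorbs the carry.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}

print(run(rules, "1011"))  # binary 11 + 1 = 12, i.e. "1100"
```

Changing the rule table changes what the machine computes without touching the interpreter, which is the essence of the universal machine.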
At Bletchley Park, Turing illustrated his ideas about machine intelligence by reference to chess, a useful source of challenging and clearly defined problems against which proposed methods of problem solving could be tested. In principle, a chess-playing computer could play by exhaustively searching through all the available moves, but in practice this is impossible because it would involve examining an astronomically large number of moves. Heuristics are necessary to guide a narrower, more selective search. Although Turing experimented with designing chess programs, he had to content himself with theory in the absence of a computer to run his program. The first true AI programs had to await the arrival of stored-program electronic digital computers.
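The difference between exhaustive and heuristic search can be sketched with a simple grid puzzle: instead of trying every path, a heuristic (here, Manhattan distance to the goal) ranks candidate moves so the most promising are explored first. The grid, the blocked cells, and the heuristic are illustrative, not a chess program.

```python
import heapq

def greedy_best_first(start, goal, blocked, size=5):
    """Find a path on a size x size grid, avoiding blocked cells."""
    def h(cell):  # heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, (x, y), path = heapq.heappop(frontier)  # most promising first
        if (x, y) == goal:
            return path
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size \
                    and nxt not in blocked and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

path = greedy_best_first((0, 0), (4, 4), blocked={(2, 2), (3, 2)})
print(len(path) - 1)  # number of moves in the path found
```

Unlike exhaustive search, the heuristic can be fooled into a longer route, but it examines far fewer states, which is exactly the trade-off Turing pointed to for chess.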
The Turing test
In 1950, Turing sidestepped the traditional debate over the definition of intelligence and proposed a practical test of computer intelligence that is now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by questioning the other two participants, which is the computer.
All communication is via keyboard and screen.
The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (For example, the computer might answer "No" to "Are you a computer?" and might follow a request to multiply one large number by another with a long pause and an incorrect answer.)
The foil must help the interrogator make a correct identification. A number of different people play the roles of interrogator and foil, and if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of the Turing test) the computer is considered an intelligent entity.
Early milestones in artificial intelligence
The first artificial intelligence program
The earliest successful AI program was written in 1951 by Christopher Strachey, who later became director of the Programming Research Group at the University of Oxford. Strachey's checkers (draughts) program ran on the Ferranti Mark I computer at the University of Manchester, England. By the summer of 1952, the program could play a complete game of checkers at a reasonable speed.
Information about the earliest successful demonstration of machine learning was published in 1952. Shopper, written by Anthony Oettinger of the University of Cambridge, ran on the EDSAC computer. Shopper's simulated world was a mall of eight shops. When instructed to buy an item, Shopper would search for it, visiting shops at random until the item was found.
While searching, Shopper memorized a few of the items stocked in each shop it visited (just as a human shopper might). The next time Shopper was sent out for the same item, or for some other item it had already located, it would go straight to the right shop. This simple form of learning is called rote learning.
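Shopper's behaviour as described above can be sketched in a few lines: random search on the first errand, rote memory afterwards. The shop names and inventories are invented for illustration, not Oettinger's actual data.

```python
import random

shops = {
    "shop_a": {"bread", "milk"},
    "shop_b": {"nails", "paint"},
    "shop_c": {"milk", "tea"},
}
memory = {}  # item -> shop where it was seen (rote learning)

def buy(item):
    if item in memory:                        # learned: go straight there
        return memory[item], 1
    visits = 0
    while True:                               # otherwise search at random
        shop = random.choice(list(shops))
        visits += 1
        for stocked in shops[shop]:
            memory.setdefault(stocked, shop)  # memorize items seen here
        if item in shops[shop]:
            return shop, visits

shop, visits = buy("tea")      # first errand: random search
shop2, revisits = buy("tea")   # second errand: straight from memory
print(shop == shop2, revisits)
```

Note that memorizing every item in a visited shop means a later errand for a different item may also succeed on the first try, just as the text describes.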
The first AI program to run in the United States was also a checkers program, written in 1952 by Arthur Samuel for the prototype of the IBM 701. Samuel took over the essentials of Strachey's program and over a period of years considerably extended it. In 1955 he added features that enabled the program to learn from experience, including mechanisms for both rote learning and generalization; these improvements eventually led to his program winning a game against a former Connecticut checkers champion in 1962.
Samuel's checkers program was also one of the first attempts at evolutionary computing. (His program "evolved" by pitting a modified copy against the current best version, with the winner becoming the new standard.) Evolutionary computing typically involves using an automatic method to generate and evaluate successive "generations" of a program until a highly proficient solution emerges.
John Holland, a major proponent of evolutionary computing, also wrote test software for the prototype IBM 701. In particular, he helped design a "virtual" neural-network rat that could be trained to run a maze. This work convinced Holland of the effectiveness of the bottom-up approach. In 1952, while continuing to consult for IBM, Holland moved to the University of Michigan to pursue a Ph.D. in mathematics.
He soon switched, however, to a new interdisciplinary program in computers and information processing (later known as communications science) created by Arthur Burks, one of the builders of ENIAC and its successor, EDVAC. In his 1959 dissertation, quite possibly the world's first Ph.D. in computer science, Holland proposed a new type of computer, a multiprocessor machine that would assign each artificial neuron in a network to a separate processor. (In 1985, Daniel Hillis solved the engineering difficulties and built the first such computer, the 65,536-processor Thinking Machines Corporation supercomputer.)
Holland joined the faculty of the University of Michigan after graduation, and over the following decades he directed much of the research into automated methods of evolutionary computing, a process now known by the term genetic algorithms.
Systems implemented in Holland's laboratory included a chess program, models of single-cell biological organisms, and a classifier system for controlling a simulated natural gas pipeline network. Genetic algorithms are no longer restricted to academic demonstrations, however; in one important practical application, a genetic algorithm cooperates with a witness to a crime in order to generate a portrait of the perpetrator.
Logical reasoning and problem solving
The ability to reason logically is an important aspect of intelligence and has always been a major focus of AI research. An important landmark in this area was a theorem-proving program written in 1955-56 by Allen Newell and J. Clifford Shaw of the RAND Corporation and Herbert Simon of Carnegie Mellon University.
Logic Theorist, as the program became known, was designed to prove theorems from Principia Mathematica (1910-13), a three-volume work by the British philosopher-mathematicians Alfred North Whitehead and Bertrand Russell. In one instance, a proof devised by the program was more elegant than the proof given in the books.
Newell, Simon, and Shaw went on to write a more powerful program, the General Problem Solver, or GPS. The first version of GPS ran in 1957, and work continued on the project for about a decade. GPS could solve an impressive variety of puzzles using a trial-and-error approach. However, one criticism of GPS, and of similar programs that lack any learning capability, is that the program's intelligence is entirely secondhand, coming from whatever information the programmer explicitly includes.
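Trial-and-error puzzle solving of the kind GPS performed can be sketched with a classic puzzle: measuring 4 litres using jugs of 3 and 5 litres, by systematically trying every legal move from each state. The puzzle choice is illustrative, not one of GPS's documented tasks.

```python
from collections import deque

def solve(cap_a=3, cap_b=5, goal=4):
    """Breadth-first search from (0, 0) until some jug holds `goal` litres."""
    start = (0, 0)
    parents = {start: None}          # state -> state it was reached from
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if goal in (a, b):
            path = []                # walk parent links back to the start
            state = (a, b)
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        pour_ab = min(a, cap_b - b)  # how much fits when pouring a -> b
        pour_ba = min(b, cap_a - a)  # how much fits when pouring b -> a
        moves = [
            (cap_a, b), (a, cap_b),  # fill either jug
            (0, b), (a, 0),          # empty either jug
            (a - pour_ab, b + pour_ab),
            (a + pour_ba, b - pour_ba),
        ]
        for nxt in moves:
            if nxt not in parents:
                parents[nxt] = (a, b)
                queue.append(nxt)
    return None

print(solve())
```

The search blindly tries every move, which illustrates the criticism quoted above: all of the "intelligence" lives in the move rules the programmer wrote down, and nothing is learned between puzzles.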