Foundations of Artificial Intelligence
A. Philosophy:
· Can formal rules be used to draw valid conclusions?
Yes, formal rules, like those in mathematics or logic, can help us draw valid conclusions. Think of it like following a recipe: if you follow the steps correctly, you'll get the right result. In logic, we use rules to make sure our conclusions are correct based on the information we start with.
· How does the mind come from a physical brain? Where does knowledge come from?
The mind is connected to the brain, but it's still unclear exactly how the brain creates the mind. Some believe the mind is just the brain working in a certain way, while others think the mind might be something more. Knowledge comes from our experiences, like things we see, hear, or learn, and our ability to think about those experiences.
· How does knowledge lead to action?
When we know something, it helps us decide what to do. For example, if we know fire is dangerous, we won't touch it. Knowledge helps us take actions that are smart and safe.
· Aristotle's Syllogism:
Aristotle was one of the first to create a clear system of rules for logical thinking. He developed something called syllogisms, which are a way to reason step by step to reach a conclusion.
For example:
· All dogs are animals (first statement)
· All animals have four legs (second statement)
· Therefore, all dogs have four legs (conclusion)
If the first two statements are true, the conclusion must also be true. (Note that the second statement here is actually false, since some animals, like snakes, have no legs; the argument is still valid in form, but a false premise means the conclusion isn't guaranteed to be true in fact.) Aristotle believed this kind of reasoning helped us understand the world.
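The syllogism above can be sketched in code: if we model each category as a set, "all A are B" becomes subset containment, and the conclusion follows because containment is transitive. The particular sets below are made-up illustrations, not from the text.

```python
# Modeling a syllogism with sets: "all A are B" means A is a subset of B.
dogs = {"rex", "fido"}
animals = {"rex", "fido", "tweety"}
four_legged = {"rex", "fido", "tweety"}  # premise 2 assumed true for the sketch

premise1 = dogs <= animals         # all dogs are animals
premise2 = animals <= four_legged  # all animals have four legs
conclusion = dogs <= four_legged   # therefore, all dogs have four legs

# Subset containment is transitive, so whenever both premises hold,
# the conclusion must hold as well.
print(premise1 and premise2 and conclusion)  # True
```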
· Descartes and Rationalism:
Descartes believed that using our ability to reason (think logically) was very important for understanding the world. This way of thinking is called rationalism. Descartes thought that reason could help us figure out what's true about the world and ourselves.
B. Mathematics
· What are the formal rules to draw valid conclusions? What can be computed?
Formal rules are like steps in a process that we follow to make sure our conclusions are correct. For example, in logic, we follow rules to make sure our reasoning is valid. In terms of computing, there are specific problems and functions that computers can solve, but not everything can be computed.
· How do we reason with uncertain information?
When we don't have all the facts or we're not completely sure about something, we use probability or estimates to reason. For example, if you don't know the weather exactly, you might say, "There's a 70% chance of rain," based on what you do know.
· Formal representation and proof algorithms: Propositional logic
In mathematics and logic, propositional logic is a way of using symbols to represent statements that can be true or false. We use rules to combine these statements and prove things logically. Alan Turing was a mathematician who wanted to figure out which problems can be solved by computers, called computable problems.
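As a small illustration of proving things in propositional logic (by exhaustive checking, a method chosen here for simplicity rather than named in the text), a formula over a few variables can be verified as a tautology by evaluating it under every truth assignment:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Check a propositional formula under every combination of truth values."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

# Modus ponens as a formula: ((p -> q) and p) -> q,
# writing the implication p -> q as (not p or q).
modus_ponens = lambda p, q: not ((not p or q) and p) or q
print(is_tautology(modus_ponens, 2))  # True: valid under all 4 assignments
```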
· (Un)decidability
In math and logic, decidability means whether a statement can be proven true or false using a system of rules. Undecidable means there are some true statements that we can never prove within the system. For example, "A line can be extended infinitely in both directions" is a true statement, but it's hard to fully prove it using just certain rules.
· (In)tractability
When we talk about tractability, we mean whether a problem can be solved in a reasonable amount of time. If the time needed to solve a problem grows explosively as the size of the problem gets bigger, we call the problem intractable. For example, if the running time doubles every time the input gets one item larger, even modestly sized problems become practically unsolvable.
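To make that growth concrete, here is a small sketch (illustrative, not from the text): a brute-force search that must look at every subset of n items examines 2**n candidates, so each extra item doubles the work.

```python
from itertools import combinations

def all_subsets(items):
    """Enumerate every subset, as a brute-force search would."""
    subsets = []
    for k in range(len(items) + 1):
        subsets.extend(combinations(items, k))
    return subsets

for n in (5, 10, 15):
    print(n, len(all_subsets(range(n))))  # 32, 1024, 32768
# The count is 2**n, doubling with every extra item; well before n = 100
# the search is far beyond any computer's reach. That is intractability.
```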
· Probability: Predicting the future
Probability is a way of predicting the chances of something happening in the future. For example, if you roll a fair die, you know there's a 1 in 6 chance of landing on any number. We use probability to make educated guesses about things we don't know for sure yet.
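The die example can be checked by simulation: roll many times and compare the observed frequency with the exact 1/6. This is a sketch; the seed and trial count are arbitrary choices.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable
trials = 100_000
sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
estimate = sixes / trials
print(f"estimated {estimate:.3f}, exact {1/6:.3f}")
# The estimate lands close to 1/6; more trials pull it closer still.
```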
C. Economics:
· How should we make decisions to maximize payoff?
To make the best decisions, we need to think about what will give us the greatest benefit or reward. This is often called maximizing payoff: choosing the option that will give us the most value, whether it's money, happiness, or something else important to us.
· Economics and how we make choices
Economics is all about understanding how people make choices. We make decisions to get the best outcomes for ourselves, called utility (which just means satisfaction or benefit). For example, if you have to choose between two things, you choose the one that will give you the most satisfaction.
· Decision theory
Decision theory helps us make the best choices, especially when we don't know everything for sure. It combines two ideas:
· Probability theory: This is about understanding the chances of different things happening.
· Utility theory: This helps us measure how much satisfaction or benefit we get from each possible outcome.
By combining these ideas, decision theory gives us a way to think about our choices when the outcome is uncertain, helping us make smart decisions to get the best result.
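Combining the two theories is just a weighted sum: multiply each outcome's utility by its probability, and pick the action with the highest expected utility. The actions, probabilities, and utility numbers below are invented for illustration.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Two actions, each with outcomes for "rain" (p = 0.7) and "no rain" (p = 0.3).
actions = {
    "take umbrella":  [(0.7, 8), (0.3, 6)],
    "leave umbrella": [(0.7, 1), (0.3, 10)],
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # take umbrella (expected utility 7.4 vs. 3.7)
```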
D. Neuroscience
· How do brains process information?
Our brains process information by receiving signals from our senses, thinking about them, and then making decisions or actions based on that information. It's like the brain is a super-powerful computer that helps us understand and react to the world around us.
· What is neuroscience?
Neuroscience is the study of the brain and the nervous system (which includes the brain, spinal cord, and nerves). Scientists in this field try to understand how our brain works and how it controls everything we do, feel, and think.
· What is the brain made of?
The brain is made up of billions of tiny cells called neurons. There are around 100 billion neurons in your brain, and each one is connected to others. These neurons work together to send signals and process information.
· Neurons as computational units
Neurons are like tiny computers in your brain. They receive information, process it, and send it to other neurons to help make decisions, store memories, and control actions. They work in a very fast and coordinated way to help your brain function.
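The "neurons as tiny computers" idea is the seed of artificial neural networks. A minimal sketch (not a biological model): a unit sums its weighted inputs and fires when the sum crosses a threshold. The weights and threshold below are made up.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With these weights, the unit behaves like a logical AND gate:
# it fires only when both inputs are on.
print(neuron([1, 1], [0.5, 0.5], 1.0))  # 1
print(neuron([1, 0], [0.5, 0.5], 1.0))  # 0
```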
E. Psychology
· Behaviorism
The behaviorism movement, led by John Watson, focused on studying how animals and humans behave based on what they experience (the stimulus) and how they react to it (the response). Behaviorists only looked at what could be measured directly, like how a rat or pigeon reacts to certain things, but they didn’t focus much on things like thoughts or feelings. Behaviorism helped us learn a lot about animal behavior, but it didn’t do as well in explaining human behavior.
· Cognitive Psychology
Cognitive psychology looks at the brain like a computer. It holds that our brain processes information, like how a computer runs programs. Psychologists like John Anderson (1980) believed that we can understand how the brain works by figuring out how it processes information: how it stores, remembers, and makes decisions, like a computer running a detailed program. This way of thinking focuses on how mental functions, like memory and problem-solving, are carried out in the brain.
F. Computer engineering:
· What do we need for artificial intelligence to succeed?
For artificial intelligence (AI) to work, we need two things: intelligence (the ability to think or make decisions) and an artifact (something we create, like a computer). The computer is the main artifact we use to build AI.
· The first operational computer
The first operational computer was called the Heath Robinson, and it was built in 1940 by a team led by Alan Turing. It was made for a special purpose: to help decode secret German messages during World War II.
· The first programmable computer
The first operational programmable computer was the Z-3, created in 1941 by Konrad Zuse in Germany. This computer could be programmed to do different tasks, unlike earlier ones that could only do one thing.
· The first electronic computer
The first electronic computer was called the ABC, and it was built by John Atanasoff and his student Clifford Berry between 1940 and 1942 at Iowa State University. This machine used electricity to work, which made it faster than earlier mechanical computers.
· The first programmable machine
Before computers, there was the first programmable machine: a loom (a machine used to weave cloth) created in 1805 by Joseph Marie Jacquard. The loom used punched cards to store the pattern for weaving, and this idea of using cards to store instructions became important for the development of computers.
G. Control theory and cybernetics
· How can artifacts operate under their own control?
A long time ago, Ktesibios of Alexandria (around 250 B.C.) built the first self-controlling machine: a water clock. This clock had a regulator that kept the water flowing at a constant rate. This invention changed what people thought machines could do, because it showed that a machine could control itself to keep things steady.
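The regulator idea can be sketched as a feedback loop: measure the error between the desired and actual level, and apply a correction proportional to it. This is a generic proportional-control sketch, not a model of Ktesibios' actual mechanism, and the numbers are arbitrary.

```python
def regulate(setpoint, level, gain=0.5, steps=20):
    """Repeatedly nudge the level toward the setpoint via feedback."""
    for _ in range(steps):
        error = setpoint - level  # how far off we are
        level += gain * error     # correction proportional to the error
    return level

print(round(regulate(setpoint=10.0, level=0.0), 3))  # converges near 10.0
```

Each step shrinks the remaining error by the gain factor, which is why the level settles at the setpoint instead of oscillating forever.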
· Modern control theory
In modern times, control theory helps us design systems that can control themselves to achieve certain goals. One part of control theory, called stochastic optimal control, is about creating systems that do the best job they can over time. This idea is similar to what we aim for in artificial intelligence (AI): designing systems that can behave in the best way possible.
· Tools used in control theory
To make these self-controlling systems work, scientists use tools like calculus (the study of change) and matrix algebra (a way to organize numbers to solve problems). These tools help us understand and design systems that can control themselves.
· AI and logical inference
While control theory focuses on making systems that behave optimally (in the best way), AI researchers use other tools, like logical inference and computation, to solve different types of problems. These problems include things like understanding language, seeing (vision), and planning ahead, which control theory doesn't usually focus on.
H. Linguistics
· How does language relate to thought?
Language and thought are closely connected, and how we learn and use language helps shape how we think. There have been different ideas about how language works and how we learn it.
· B.F. Skinner's theory on language learning
In 1957, B.F. Skinner published a book called Verbal Behavior, where he explained the behaviorist approach to learning language. He believed language learning is like any other behavior we learn, through rewards and repetition.
· Noam Chomsky's criticism
Noam Chomsky, another famous thinker, disagreed with Skinner's theory. Chomsky published his own book, Syntactic Structures (1957), where he argued that Skinner's theory didn't explain the creativity people show when they use language. For example, we can create and understand sentences we've never heard before, which Skinner's theory didn't fully explain.
· Linguistics and AI
Modern linguistics (the study of language) and artificial intelligence (AI) started around the same time and grew together. They intersected in a new field called computational linguistics or natural language processing (NLP), which focuses on teaching computers to understand and use language.
· Understanding language is more complex
Over time, it became clear that understanding language is much more complicated than it seemed in 1957. To understand a sentence, we don't just need to know the rules of grammar. We also need to understand the subject matter (what the sentence is about) and the context (the situation or background in which the sentence is used).
· Knowledge representation
In AI, knowledge representation is about putting information into a form that computers can understand and use to make decisions. This is closely tied to language and informed by linguistic research, because understanding how language works is key to teaching computers to think and reason.
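One classic way to represent knowledge (an illustrative sketch, not a method the text names) is as a set of facts plus if-then rules, with forward chaining deriving new facts until nothing changes:

```python
facts = {"fire is hot"}
rules = [
    ({"fire is hot"}, "fire is dangerous"),
    ({"fire is dangerous"}, "avoid touching fire"),
]

# Forward chaining: apply every rule whose premises are all known facts,
# and repeat until no new fact can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['avoid touching fire', 'fire is dangerous', 'fire is hot']
```

This is the same knowledge-to-action link the Philosophy section described: knowing "fire is hot" lets the system derive "avoid touching fire".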