All essential concepts and key figures from the AI curriculum
AI is an area of computer science that aims to make machines do intelligent things, that is, learn and solve problems, similar to the natural intelligence of humans and animals.
In AI, an intelligent agent receives information from the environment, performs computations to decide what action to take to achieve the goal, and takes actions autonomously.
AI can improve its performance with learning.
ML is a set of mathematical algorithms that can automatically analyze data and make decisions or predict results for given data.
The term "Machine Learning" was coined by Arthur Samuel in 1959 at IBM.
Narrow AI (Weak AI): Used to solve specific problems. Almost all AI applications today are narrow AI.
General AI (Strong AI): Able to solve general problems, similar to humans in learning, thinking, and inventing.
Super AI (Superintelligence): AI after the singularity point when it surpasses human intelligence.
Supervised Learning: Models trained with labeled data. Divided into classification (predicting discrete categories) and regression (predicting continuous values).
Unsupervised Learning: Models fed with unlabeled data. Divided into clustering (grouping similar examples) and dimensionality reduction (compressing features while preserving structure).
Semi-supervised Learning: Uses both labeled and unlabeled data.
Reinforcement Learning: Learns through trial and error to find actions that yield maximum cumulative reward.
Proposed by Alan Turing in October 1950 in paper "Computing Machinery and Intelligence".
A three-person "imitation game" in which an interrogator tries to determine which player is a computer and which is a human.
The interrogator is limited to written questions and the players' written responses.
To date, no computer has passed the Turing test.
Ray Kurzweil predicts AI will pass Turing test in 2029.
Developed from the 1950s to the 1970s; inspired by the human brain and modeled on biological neural networks.
Usually has three layers: an input layer, a hidden layer, and an output layer.
Requires training with a large amount of data.
After training, can predict results for unseen data.
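The three-layer structure can be sketched as a single forward pass in pure Python. The weights below are hypothetical placeholders; in practice, training would adjust them to fit the data.

```python
import math

def forward(x, w_hidden, w_out):
    """One forward pass through a 3-layer network:
    input -> hidden (sigmoid) -> output (sigmoid).
    Weights here are illustrative; training would tune them."""
    sigmoid = lambda z: 1 / (1 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    return sigmoid(sum(wi * hi for wi, hi in zip(w_out, hidden)))

# 2 inputs, 2 hidden neurons, 1 output
y = forward([1.0, 0.5], w_hidden=[[0.4, -0.2], [0.3, 0.8]], w_out=[0.7, -0.5])
print(0.0 < y < 1.0)  # sigmoid output always lies in (0, 1)
```

The sigmoid squashes each neuron's weighted sum into (0, 1), which is why the final output can be read as a probability-like score.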
Developed from 2010s-present.
Special type of neural network with more than one hidden layer.
Made possible with increased computing power (GPUs) and improved algorithms.
DL is a subset of ML.
Outperforms many other algorithms on large datasets.
Best-known supervised learning algorithm.
Used for both classification and regression problems.
Developed at AT&T Bell Laboratories by Vladimir Vapnik and colleagues (notably Corinna Cortes) in the 1990s.
One of the most robust prediction methods.
Advantages: effective in high-dimensional spaces; memory-efficient (the decision function uses only the support vectors); versatile through the choice of kernel function.
Disadvantages: training is slow on very large datasets; performance is sensitive to the choice of kernel and parameters; does not directly provide probability estimates.
Supervised learning algorithm based on Bayes' Theorem, used mainly for classification.
Called "naive" because assumes all features are independent - rarely true but works surprisingly well.
Types of Naive Bayes: Gaussian (continuous features), Multinomial (count features such as word frequencies), and Bernoulli (binary features).
Advantages: Simple and fast, works well with high-dimensional data, requires small training data.
Disadvantages: Independence assumption unrealistic, poor performance with strongly correlated features.
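A minimal Gaussian Naive Bayes can be written in pure Python: estimate a prior and per-feature mean/variance for each class, then score a query with log prior plus summed log likelihoods (the "naive" independence product). The toy dataset below is invented for illustration.

```python
import math

def fit_gaussian_nb(X, y):
    """Estimate, for each class, its prior and per-feature mean/variance."""
    stats = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                 for col, m in zip(zip(*rows), means)]
        stats[c] = (len(rows) / len(y), means, vars_)
    return stats

def predict_nb(stats, x):
    """Pick the class maximizing log prior + summed log Gaussian likelihoods,
    i.e. the naive product over (assumed) independent features."""
    def log_posterior(c):
        prior, means, vars_ = stats[c]
        return math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, vars_))
    return max(stats, key=log_posterior)

X = [(1, 1), (1, 2), (2, 1), (5, 5), (6, 5), (5, 6)]
y = ["A", "A", "A", "B", "B", "B"]
stats = fit_gaussian_nb(X, y)
print(predict_nb(stats, (1.5, 1.5)))  # -> A
```

Working in log space avoids multiplying many small probabilities into numerical underflow, which is the standard trick in Naive Bayes implementations.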
Nonparametric, supervised learning method for classification and regression.
Represents a set of if-then rules learned from data.
Predicts by following path from root node to leaf.
Deeper trees capture more detail but risk overfitting.
Terminology: root node (the first split), internal/decision nodes, branches (outcomes of a test), leaf nodes (final predictions), splitting, and pruning (cutting back branches to reduce overfitting).
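The root-to-leaf prediction path can be sketched with a tiny hypothetical tree; the questions and labels below are invented for illustration.

```python
# Each internal node asks a yes/no question about a feature; leaves hold labels.
# A hypothetical tree for deciding whether to play outside.
tree = {
    "question": lambda s: s["outlook"] == "sunny",
    "yes": {"question": lambda s: s["humidity"] > 70,
            "yes": "stay in", "no": "play"},
    "no": "play",
}

def predict(node, sample):
    """Follow if-then tests from the root down to a leaf, as described above."""
    while isinstance(node, dict):
        node = node["yes"] if node["question"](sample) else node["no"]
    return node

print(predict(tree, {"outlook": "sunny", "humidity": 80}))  # -> stay in
```

Each added level of nesting corresponds to a deeper tree: more expressive, but more prone to overfitting.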
Ensemble learning algorithm using multiple decision trees.
Works for both classification and regression.
Improves performance by combining many trees.
Reduces overfitting compared to single tree.
How it works: draw bootstrap samples of the training data, grow one decision tree per sample (considering a random subset of features at each split), then aggregate the trees' outputs: majority vote for classification, average for regression.
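These steps can be sketched in pure Python. As a deliberate simplification, each "tree" here is a one-split decision stump on a random feature, split at the feature mean; real random forests grow full decision trees. The dataset and seed are illustrative.

```python
import random
from collections import Counter

def fit_stump(data):
    """A one-split 'decision stump' on a random feature, split at the mean.
    Simplified stand-in for a full decision tree."""
    f = random.randrange(len(data[0][0]))
    t = sum(x[f] for x, _ in data) / len(data)
    left = Counter(lab for x, lab in data if x[f] <= t)
    right = Counter(lab for x, lab in data if x[f] > t)
    left_lab = left.most_common(1)[0][0]
    right_lab = right.most_common(1)[0][0] if right else left_lab
    return lambda x: left_lab if x[f] <= t else right_lab

def fit_forest(data, n_trees=25):
    """Bootstrap-sample the data, fit one stump per sample, vote at predict time."""
    trees = [fit_stump(random.choices(data, k=len(data))) for _ in range(n_trees)]
    return lambda x: Counter(tr(x) for tr in trees).most_common(1)[0][0]

random.seed(0)
data = [((1, 1), "A"), ((2, 1), "A"), ((1, 2), "A"),
        ((8, 8), "B"), ((9, 8), "B"), ((8, 9), "B")]
forest = fit_forest(data)
print(forest((1.5, 1.5)), forest((8.5, 8.5)))
```

Because each stump sees a different bootstrap sample and a random feature, individual errors tend to cancel in the vote, which is why the ensemble overfits less than a single deep tree.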
Supervised learning algorithm for classification and regression.
Classification based on majority vote among K nearest neighbors.
How it works: compute the distance (typically Euclidean) from the query point to every training point, take the K nearest, and output the majority class (classification) or the average value (regression).
Key Notes: the choice of K matters (a small K is noise-sensitive, a large K over-smooths); features should be scaled so no single feature dominates the distance; KNN is a "lazy" learner with no training phase, so prediction can be slow on large datasets.
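The whole algorithm fits in a few lines of pure Python; the two clusters below are an invented toy example.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of ((x, y), label) pairs; distance is Euclidean."""
    dists = sorted((math.dist(point, query), label) for point, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_predict(train, (2, 2)))  # -> A (nearest neighbors are all "A")
```

Note that "fitting" KNN is just storing the data; all the work happens at query time, which is the lazy-learner trade-off mentioned above.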
Linear Discriminant Analysis (LDA): supervised; projects data onto directions that maximize the separation between classes.
Principal Component Analysis (PCA): unsupervised; projects data onto the directions of maximum variance (the principal components).
Both commonly used for dimension reduction.
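The core of PCA, finding the direction of maximum variance, can be sketched for 2-D data with power iteration on the covariance matrix (the data points below are an invented near-diagonal cloud):

```python
import math

def first_principal_component(points, iters=100):
    """Power iteration on the 2x2 covariance matrix: repeatedly multiplying a
    vector by the matrix and normalizing converges to the dominant
    eigenvector, i.e. the direction of maximum variance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # covariance matrix [[cxx, cxy], [cxy, cyy]]
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(*w)
        v = (w[0] / norm, w[1] / norm)
    return v

points = [(1, 1), (2, 2.1), (3, 2.9), (4, 4.2), (5, 5)]
pc = first_principal_component(points)
print(pc)  # roughly (0.70, 0.71): the data vary mainly along the diagonal
```

Projecting each point onto this direction (and, in higher dimensions, onto the next few eigenvectors) is what reduces the data's dimensionality while keeping most of its variance.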
Alan Turing: British mathematician, developed the Bombe code-breaking machine, proposed the Turing test.
Ray Kurzweil: Author, inventor, futurist, predicted AI passing Turing test in 2029 and singularity in 2045.
Stephen Hawking: Theoretical physicist, warned about AI risks.
Gray Scott: American futurist, quote about AI surpassing humans by 2035.
Steve Polyak: American neurologist, quote about natural stupidity vs AI.
Arthur Samuel: Coined term "Machine Learning" at IBM in 1959.
Tom Mitchell: Provided famous ML definition in 1997.
Vladimir Vapnik: Developed SVM at AT&T Bell Labs.
Demis Hassabis: Co-founded DeepMind (AlphaGo, AlphaFold).
Elon Musk: Co-founded OpenAI and Neuralink.
Simon Knowles & Nigel Toon: Founded Graphcore (IPU).
Robert Griesemer, Rob Pike, Ken Thompson: Developed Go language at Google.
Yangqing Jia: Developed Caffe framework at UC Berkeley.
Accuracy: Percentage of correct predictions out of all predictions.
Precision: Of all items predicted as positive, how many were actually positive?
Recall: Of all actual positive cases, how many did the model catch?
F1 Score: Harmonic mean of Precision and Recall. Balances false positives and false negatives.
AUC-ROC: Measures model's ability to separate positive and negative classes.
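The first four metrics follow directly from the confusion-matrix counts (true/false positives and negatives); the counts below are an invented example.

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix
    counts, guarding against division by zero."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

acc, p, r, f1 = classification_metrics(tp=8, fp=2, fn=4, tn=6)
print(acc, p)  # -> 0.7 0.8
```

Note how precision and recall pull in different directions here (0.8 vs. about 0.67), and the F1 score (about 0.73) sits between them as their harmonic mean.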
Cloud AI: Models trained and served in remote data centers; offers abundant compute and storage, but requires connectivity and adds latency.
Edge AI: Inference runs on the device itself; lower latency and better privacy, but limited compute and power budget.
GPUs (Graphics Processing Units): Massively parallel processors originally built for graphics; the dominant hardware for training deep networks.
FPGAs (Field Programmable Gate Arrays): Reconfigurable chips that can be programmed for a specific workload; often used for energy-efficient inference.
IPUs (Intelligence Processing Units): Processors designed by Graphcore specifically for machine-learning workloads.
Programming Languages: Python is the dominant language for AI; R, Java, C++, and Go are also used.
Key Frameworks: TensorFlow, PyTorch, Keras, and Caffe are widely used deep-learning frameworks.