English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 151 Lessons (23h 6m) | 3.48 GB
#BreakIntoAI with the Machine Learning Specialization. Master fundamental AI concepts and develop practical machine learning skills in this beginner-friendly, 3-course program taught by AI visionary Andrew Ng.
WHAT YOU WILL LEARN
Build ML models with NumPy & scikit-learn; build & train supervised models for prediction & binary classification tasks (linear and logistic regression; see the sketch after this list)
Build & train a neural network with TensorFlow to perform multi-class classification, & build & use decision trees & tree ensemble methods
Apply best practices for ML development & use unsupervised learning techniques, including clustering & anomaly detection
Build recommender systems with a collaborative filtering approach & a content-based deep learning method & build a deep reinforcement learning model
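To give a feel for the first course's material, here is a minimal, illustrative sketch of the two supervised models named above, fit with scikit-learn on tiny made-up data. The numbers, feature meanings, and predictions are assumptions for illustration only, not the course's lab code.

# Illustrative sketch only: a linear-regression and a logistic-regression model
# in the spirit of the first course, trained on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: predict a continuous target from one feature (e.g. size -> price).
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([300.0, 500.0, 700.0, 900.0])
reg = LinearRegression().fit(X, y)
print(reg.predict([[5.0]]))          # approximately 1100.0 for this toy data

# Binary classification: predict a 0/1 label with logistic regression.
X_c = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y_c = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(X_c, y_c)
print(clf.predict([[2.8]]), clf.predict_proba([[2.8]]))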
The Machine Learning Specialization is a foundational online program created in collaboration between DeepLearning.AI and Stanford Online. This beginner-friendly program will teach you the fundamentals of machine learning and how to use these techniques to build real-world AI applications.
This Specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field.
It provides a broad introduction to modern machine learning, including supervised learning (multiple linear regression, logistic regression, neural networks, and decision trees), unsupervised learning (clustering, dimensionality reduction, recommender systems), and some of the best practices used in Silicon Valley for artificial intelligence and machine learning innovation (evaluating and tuning models, taking a data-centric approach to improving performance, and more).
By the end of this Specialization, you will have mastered key concepts and gained the practical know-how to quickly and powerfully apply machine learning to challenging real-world problems. If you’re looking to break into AI or build a career in machine learning, the new Machine Learning Specialization is the best place to start.
By the end of this Specialization, you will be ready to:
- Build machine learning models in Python using popular machine learning libraries NumPy and scikit-learn.
- Build and train supervised machine learning models for prediction and binary classification tasks, including linear regression and logistic regression.
- Build and train a neural network with TensorFlow to perform multi-class classification (see the sketch after this list).
- Apply best practices for machine learning development so that your models generalize to data and tasks in the real world.
- Build and use decision trees and tree ensemble methods, including random forests and boosted trees.
- Use unsupervised learning techniques, including clustering and anomaly detection.
- Build recommender systems with a collaborative filtering approach and a content-based deep learning method.
- Build a deep reinforcement learning model.
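As a taste of the neural-network material, below is a minimal, illustrative TensorFlow/Keras sketch of a multi-class classifier that keeps a linear output layer and applies softmax inside the loss, a common numerical-stability practice. The layer sizes, random data, and training settings here are assumptions for illustration, not the course's lab code.

# Illustrative sketch only: a small multi-class classifier in TensorFlow/Keras.
import numpy as np
import tensorflow as tf

# Toy data: 200 examples with 2 features, labels in {0, 1, 2, 3}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)).astype("float32")
y = rng.integers(0, 4, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(25, activation="relu"),
    tf.keras.layers.Dense(15, activation="relu"),
    # Linear output layer; softmax is applied inside the loss below.
    tf.keras.layers.Dense(4, activation="linear"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(X, y, epochs=10, verbose=0)

# Convert logits to class probabilities at prediction time.
probs = tf.nn.softmax(model.predict(X[:5]))
print(probs.numpy())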
Table of Contents
advanced-learning-algorithms
neural-networks
neural-networks-intuition
1 welcome
2 neurons-and-the-brain
3 demand-prediction
4 example-recognizing-images
5 join-the-deeplearning-ai-forum-to-ask-questions-get-support-or-share-amazing_instructions
neural-network-model
6 neural-network-layer
7 more-complex-neural-networks
8 inference-making-predictions-forward-propagation
tensorflow-implementation
9 inference-in-code
10 data-in-tensorflow
11 building-a-neural-network
neural-network-implementation-in-python
12 forward-prop-in-a-single-layer
13 general-implementation-of-forward-propagation
speculations-on-artificial-general-intelligence-agi
14 is-there-a-path-to-agi
vectorization-optional
15 how-neural-networks-are-implemented-efficiently
16 matrix-multiplication
17 matrix-multiplication-rules
18 matrix-multiplication-code
neural-network-training
neural-network-training
19 tensorflow-implementation
20 training-details
activation-functions
21 alternatives-to-the-sigmoid-activation
22 choosing-activation-functions
23 why-do-we-need-activation-functions
multiclass-classification
24 multiclass
25 softmax
26 neural-network-with-softmax-output
27 improved-implementation-of-softmax
28 classification-with-multiple-outputs-optional
additional-neural-network-concepts
29 advanced-optimization
30 additional-layer-types
back-propagation-optional
31 what-is-a-derivative-optional
32 computation-graph-optional
33 larger-neural-network-example-optional
advice-for-applying-machine-learning
advice-for-applying-machine-learning
34 deciding-what-to-try-next
35 evaluating-a-model
36 model-selection-and-training-cross-validation-test-sets
bias-and-variance
37 diagnosing-bias-and-variance
38 regularization-and-bias-variance
39 establishing-a-baseline-level-of-performance
40 learning-curves
41 deciding-what-to-try-next-revisited
42 bias-variance-and-neural-networks
machine-learning-development-process
43 iterative-loop-of-ml-development
44 error-analysis
45 adding-data
46 transfer-learning-using-data-from-a-different-task
47 full-cycle-of-a-machine-learning-project
48 fairness-bias-and-ethics
skewed-datasets-optional
49 error-metrics-for-skewed-datasets
50 trading-off-precision-and-recall
decision-trees
decision-trees
51 decision-tree-model
52 learning-process
decision-tree-learning
53 measuring-purity
54 choosing-a-split-information-gain
55 putting-it-together
56 using-one-hot-encoding-of-categorical-features
57 continuous-valued-features
58 regression-trees-optional
tree-ensembles
59 using-multiple-decision-trees
60 sampling-with-replacement
61 random-forest-algorithm
62 xgboost
63 when-to-use-decision-trees
end-of-access-to-lab-notebooks
64 important-reminder-about-end-of-access-to-lab-notebooks_instructions
conversations-with-andrew-optional
65 andrew-ng-and-chris-manning-on-natural-language-processing
acknowledgments
66 acknowledgements_instructions
machine-learning
week-1-introduction-to-machine-learning
overview-of-machine-learning
67 welcome-to-machine-learning
68 applications-of-machine-learning
69 join-the-deeplearning-ai-forum-to-ask-questions-get-support-or-share-amazing_instructions
supervised-vs-unsupervised-machine-learning
70 what-is-machine-learning
71 supervised-learning-part-1
72 supervised-learning-part-2
73 unsupervised-learning-part-1
74 unsupervised-learning-part-2
75 jupyter-notebooks
regression-model
76 linear-regression-model-part-1
77 linear-regression-model-part-2
78 cost-function-formula
79 cost-function-intuition
80 visualizing-the-cost-function
81 visualization-examples
train-the-model-with-gradient-descent
82 gradient-descent
83 implementing-gradient-descent
84 gradient-descent-intuition
85 learning-rate
86 gradient-descent-for-linear-regression
87 running-gradient-descent
week-2-regression-with-multiple-input-variables
multiple-linear-regression
88 multiple-features
89 vectorization-part-1
90 vectorization-part-2
91 gradient-descent-for-multiple-linear-regression
gradient-descent-in-practice
92 feature-scaling-part-1
93 feature-scaling-part-2
94 checking-gradient-descent-for-convergence
95 choosing-the-learning-rate
96 feature-engineering
97 polynomial-regression
week-3-classification
classification-with-logistic-regression
98 motivations
99 logistic-regression
100 decision-boundary
cost-function-for-logistic-regression
101 cost-function-for-logistic-regression
102 simplified-cost-function-for-logistic-regression
gradient-descent-for-logistic-regression
103 gradient-descent-implementation
the-problem-of-overfitting
104 the-problem-of-overfitting
105 addressing-overfitting
106 cost-function-with-regularization
107 regularized-linear-regression
108 regularized-logistic-regression
end-of-access-to-lab-notebooks
109 important-reminder-about-end-of-access-to-lab-notebooks_instructions
conversations-with-andrew-optional
110 andrew-ng-and-fei-fei-li-on-human-centered-ai
acknowledgments
111 acknowledgments_instructions
unsupervised-learning-recommenders-reinforcement-learning
unsupervised-learning
welcome-to-the-course
112 welcome
113 join-the-deeplearning-ai-forum-to-ask-questions-get-support-or-share-amazing_instructions
clustering
114 what-is-clustering
115 k-means-intuition
116 k-means-algorithm
117 optimization-objective
118 initializing-k-means
119 choosing-the-number-of-clusters
anomaly-detection
120 finding-unusual-events
121 gaussian-normal-distribution
122 anomaly-detection-algorithm
123 developing-and-evaluating-an-anomaly-detection-system
124 anomaly-detection-vs-supervised-learning
125 choosing-what-features-to-use
recommender-systems
collaborative-filtering
126 making-recommendations
127 using-per-item-features
128 collaborative-filtering-algorithm
129 binary-labels-favs-likes-and-clicks
recommender-systems-implementation-detail
130 mean-normalization
131 tensorflow-implementation-of-collaborative-filtering
132 finding-related-items
content-based-filtering
133 collaborative-filtering-vs-content-based-filtering
134 deep-learning-for-content-based-filtering
135 recommending-from-a-large-catalogue
136 ethical-use-of-recommender-systems
137 tensorflow-implementation-of-content-based-filtering
principal-component-analysis
138 reducing-the-number-of-features-optional
139 pca-algorithm-optional
140 pca-in-code-optional
reinforcement-learning
reinforcement-learning-introduction
141 what-is-reinforcement-learning
142 mars-rover-example
143 the-return-in-reinforcement-learning
144 making-decisions-policies-in-reinforcement-learning
145 review-of-key-concepts
state-action-value-function
146 state-action-value-function-definition
147 state-action-value-function-example
148 bellman-equation
149 random-stochastic-environment-optional
continuous-state-spaces
150 example-of-continuous-state-space-applications
151 lunar-lander
152 learning-the-state-value-function
153 algorithm-refinement-improved-neural-network-architecture
154 algorithm-refinement-greedy-policy
155 algorithm-refinement-mini-batch-and-soft-updates-optional
156 the-state-of-reinforcement-learning
end-of-access-to-lab-notebooks
157 important-reminder-about-end-of-access-to-lab-notebooks_instructions
summary-and-thank-you
158 summary-and-thank-you
conversations-with-andrew-optional
159 andrew-ng-and-chelsea-finn-on-ai-and-robotics
acknowledgments
160 acknowledgments_instructions
161 optional-opportunity-to-mentor-other-learners_instructions
