English | MP4 | AVC 1280×720 | AAC 44 kHz 2ch | 17h 7m | 2.47 GB
The bestselling book on Python deep learning, now covering generative AI, Keras 3, PyTorch, and JAX!
Deep Learning with Python, Third Edition puts the power of deep learning in your hands. This new edition includes the latest Keras and TensorFlow features, generative AI models, and added coverage of PyTorch and JAX. Learn directly from the creator of Keras and step confidently into the world of deep learning with Python.
In Deep Learning with Python, Third Edition you’ll discover:
- Deep learning from first principles
- The latest features of Keras 3
- A primer on JAX, PyTorch, and TensorFlow
- Image classification and image segmentation
- Time series forecasting
- Large Language Models
- Text classification and machine translation
- Text and image generation—build your own GPT and diffusion models!
- Scaling and tuning models
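The "deep learning from first principles" approach in the list above can be sketched in a few lines of plain NumPy: a single dense layer fit by hand-written gradient descent. This is a hypothetical toy for illustration, not code from the book.

```python
# Toy "first principles" example (not from the book): fit one dense layer
# (a linear model) to noisy linear data with manually derived gradients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 samples, 3 features
true_W = np.array([[2.0], [-1.0], [0.5]])
y = X @ true_W + 0.1 * rng.normal(size=(100, 1))

W = np.zeros((3, 1))                          # parameters to learn
b = 0.0
lr = 0.1
losses = []
for step in range(200):
    pred = X @ W + b                          # forward pass
    err = pred - y
    losses.append(float(np.mean(err ** 2)))   # mean squared error
    grad_W = 2 * X.T @ err / len(X)           # dLoss/dW
    grad_b = 2 * float(np.mean(err))          # dLoss/db
    W -= lr * grad_W                          # gradient-descent update
    b -= lr * grad_b

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Frameworks like Keras automate exactly these steps (the forward pass, the gradients, and the update rule) for arbitrarily deep stacks of layers.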
With over 100,000 copies sold, Deep Learning with Python makes it possible for developers, data scientists, and machine learning enthusiasts to put deep learning into action. In this expanded and updated third edition, Keras creator François Chollet offers insights for both novice and experienced machine learning practitioners. You’ll master state-of-the-art deep learning tools and techniques, from the latest features of Keras 3 to building AI models that can generate text and images.
In less than a decade, deep learning has changed the world—twice. First, Python-based libraries like Keras, TensorFlow, and PyTorch elevated neural networks from lab experiments to high-performance production systems deployed at scale. And now, through Large Language Models and other generative AI tools, deep learning is again transforming business and society. In this new edition, Keras creator François Chollet invites you into this amazing subject in the fluid, mentoring style of a true insider.
Deep Learning with Python, Third Edition makes the concepts behind deep learning and generative AI understandable and approachable. This complete rewrite of the bestselling original includes fresh chapters on transformers, building your own GPT-like LLM, and generating images with diffusion models. Each chapter introduces practical projects and code examples that build your understanding of deep learning, layer by layer.
What’s Inside
- Hands-on, code-first learning
- Comprehensive, from basics to generative AI
- Intuitive, easy-to-follow math explanations
- Examples in Keras, PyTorch, JAX, and TensorFlow
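The last bullet reflects Keras 3's multi-backend design: one model definition can run on TensorFlow, JAX, or PyTorch. A minimal sketch, assuming a Keras 3 installation with at least one backend available:

```python
# Minimal sketch of Keras 3's multi-backend design (assumes keras>=3).
import os
# Choose the backend *before* importing keras, e.g.:
# os.environ["KERAS_BACKEND"] = "jax"   # or "tensorflow", "torch"

import keras
from keras import layers

# The same model code works unchanged on every backend.
model = keras.Sequential([
    layers.Input(shape=(4,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print("backend:", keras.backend.backend())
```

Switching backends is a one-line environment change, so the book's examples can be followed with whichever framework you already use.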
Table of Contents
Chapter 1. What is deep learning
Chapter 1. Artificial intelligence
Chapter 1. Machine learning
Chapter 1. Learning rules and representations from data
Chapter 1. The deep in deep learning
Chapter 1. Understanding how deep learning works, in three figures
Chapter 1. The age of generative AI
Chapter 1. What deep learning has achieved so far
Chapter 1. Beware of the short-term hype
Chapter 1. Summer can turn to winter
Chapter 1. The promise of AI
Chapter 2. The mathematical building blocks of neural networks
Chapter 2. Data representations for neural networks
Chapter 2. The gears of neural networks – Tensor operations
Chapter 2. The engine of neural networks – Gradient-based optimization
Chapter 2. Looking back at our first example
Chapter 2. Summary
Chapter 3. Introduction to TensorFlow, PyTorch, JAX, and Keras
Chapter 3. How these frameworks relate to each other
Chapter 3. Introduction to TensorFlow
Chapter 3. Introduction to PyTorch
Chapter 3. Introduction to JAX
Chapter 3. Introduction to Keras
Chapter 3. Summary
Chapter 4. Classification and regression
Chapter 4. Classifying newswires – A multiclass classification example
Chapter 4. Predicting house prices – A regression example
Chapter 4. Summary
Chapter 5. Fundamentals of machine learning
Chapter 5. Evaluating machine-learning models
Chapter 5. Improving model fit
Chapter 5. Improving generalization
Chapter 5. Summary
Chapter 6. The universal workflow of machine learning
Chapter 6. Developing a model
Chapter 6. Deploying your model
Chapter 6. Summary
Chapter 7. A deep dive on Keras
Chapter 7. Different ways to build Keras models
Chapter 7. Using built-in training and evaluation loops
Chapter 7. Writing your own training and evaluation loops
Chapter 7. Summary
Chapter 8. Image classification
Chapter 8. Training a ConvNet from scratch on a small dataset
Chapter 8. Using a pretrained model
Chapter 8. Summary
Chapter 9. ConvNet architecture patterns
Chapter 9. Residual connections
Chapter 9. Batch normalization
Chapter 9. Depthwise separable convolutions
Chapter 9. Putting it together – A mini Xception-like model
Chapter 9. Beyond convolution – Vision Transformers
Chapter 9. Summary
Chapter 10. Interpreting what ConvNets learn
Chapter 10. Visualizing ConvNet filters
Chapter 10. Visualizing heatmaps of class activation
Chapter 10. Visualizing the latent space of a ConvNet
Chapter 10. Summary
Chapter 11. Image segmentation
Chapter 11. Training a segmentation model from scratch
Chapter 11. Using a pretrained segmentation model
Chapter 11. Summary
Chapter 12. Object detection
Chapter 12. Training a YOLO model from scratch
Chapter 12. Using a pretrained RetinaNet detector
Chapter 12. Summary
Chapter 13. Timeseries forecasting
Chapter 13. A temperature forecasting example
Chapter 13. Recurrent neural networks
Chapter 13. Going even further
Chapter 13. Summary
Chapter 14. Text classification
Chapter 14. Preparing text data
Chapter 14. Sets vs. sequences
Chapter 14. Set models
Chapter 14. Sequence models
Chapter 14. Summary
Chapter 15. Language models and the Transformer
Chapter 15. Sequence-to-sequence learning
Chapter 15. The Transformer architecture
Chapter 15. Classification with a pretrained Transformer
Chapter 15. What makes the Transformer effective
Chapter 15. Summary
Chapter 16. Text generation
Chapter 16. Training a mini-GPT
Chapter 16. Using a pretrained LLM
Chapter 16. Going further with LLMs
Chapter 16. Where are LLMs heading next
Chapter 16. Summary
Chapter 17. Image generation
Chapter 17. Diffusion models
Chapter 17. Text-to-image models
Chapter 17. Summary
Chapter 18. Best practices for the real world
Chapter 18. Scaling up model training with multiple devices
Chapter 18. Speeding up training and inference with lower-precision computation
Chapter 18. Summary
Chapter 19. The future of AI
Chapter 19. Scale isn't all you need
Chapter 19. How to build intelligence
Chapter 19. The missing ingredients – Search and symbols
Chapter 20. Conclusions
Chapter 20. Limitations of deep learning
Chapter 20. What might lie ahead
Chapter 20. Staying up to date in a fast-moving field
Chapter 20. Final words
