
Learner Reviews & Feedback for Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization by DeepLearning.AI

4.9 stars · 63,404 ratings

About the Course

In the second course of the Deep Learning Specialization, you will open the deep learning black box to understand the processes that drive performance and generate good results systematically.

By the end, you will know the best practices for setting up train/dev/test sets and analyzing bias/variance when building deep learning applications; be able to use standard neural network techniques such as initialization, L2 and dropout regularization, hyperparameter tuning, batch normalization, and gradient checking; implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, Momentum, RMSprop, and Adam, and check their convergence; and implement a neural network in TensorFlow.

The Deep Learning Specialization is our foundational program that will help you understand the capabilities, challenges, and consequences of deep learning and prepare you to participate in the development of leading-edge AI technology. It provides a pathway to gain the knowledge and skills to apply machine learning to your work, level up your technical career, and take the definitive step into the world of AI.
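The optimization algorithms named above all refine the same gradient-update loop. As a rough illustration only (not course material), here is a minimal pure-Python sketch of a single Adam step, which combines Momentum's first-moment estimate with RMSprop's second-moment estimate; the function name and hyperparameter defaults are illustrative:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * grad       # Momentum-style first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # RMSprop-style second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta**2 (gradient is 2*theta) from theta = 5.0.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.1)
```

After a few hundred steps the parameter settles near the minimum at zero, which is the kind of convergence check the course asks you to perform.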

Top reviews

XG

Oct 31, 2017

Thank you Andrew!! I have now started to use TensorFlow; however, this tool is not well suited for research. Maybe PyTorch could be considered in the future!! And let us know how to use PyTorch on Windows.

DD

Mar 29, 2020

I have done two courses under Andrew Ng, and I am grateful to Coursera for their highly optimized and easy-to-follow course structure. It has greatly helped me gain confidence in this field. Thank you.


7051 - 7075 of 7,274 Reviews for Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

By Abdul-wahab M

Feb 9, 2025

is alright

By Mirta M R H

Jul 21, 2020

Very good!

By Yashika S

Sep 10, 2019

tough one

By Mor K

Aug 30, 2019

excellent

By Luis E O

May 17, 2019

Excellent

By IURII B

Apr 3, 2018

Thank you

By MD. E k

Apr 30, 2020

was good

By Suman D

Jul 27, 2018

Awesome.

By Davit K

Jul 13, 2018

easy bb

By 刘倬瑞

Nov 2, 2017

helpful

By Suraj P

Jul 17, 2020

Great!

By SUMIT Y

Jul 4, 2020

NICE!!

By qiaohong

Oct 28, 2019

The assignments are too easy.

By Sonia D

Jan 30, 2019

Useful

By DEEPOO M

Jul 18, 2020

great

By Johannes C

Aug 29, 2017

Good!

By Pallavi N

Jun 26, 2022

Nice

By Aditya S

Aug 9, 2019

good

By Łukasz Z

May 2, 2019

bugs

By Preethi A

Jul 3, 2018

good

By Dheeraj M P

Feb 23, 2018

good

By Darwin S

May 20, 2022

ok

By Alexandru I

Jan 31, 2022

ok

By Mohamed S

Oct 20, 2019

e

By Joshua P J

Jun 8, 2018

I've loved Andrew Ng's other courses, but this course was boring and not well organized. The lectures were unfocused and rambled a lot; they're nearly the opposite in style of Prof. Ng's other material, which I found extremely well organized. Most topics could be shortened by 33-50% with no loss of clarity.

The course structure itself could use improvement:

The first part of Week 3 (Hyperparameter Tuning) belongs in Week 2.

The third part of Week 3 (Multi-Class Classification) should be its own week and its own assignment and could really be its own course. This is *THE* problem that almost every "applied" machine learning paper I've read is attempting to solve, whether by deep learning or some other class of algorithms. (Context and full disclosure: I'm a Ph.D. Geophysicist and my research is in seismology and volcanology.)

The introduction to TensorFlow needs to explain how objects and data structures work in TF. It really needs to explain the structure and syntax of the feed dictionary.

In the programming assignment for Week 3, there are three issues:

(a) The correct use of feed_dict in 1.3 is completely new and cannot be guessed from the instructions or the TF website, and it's not clear why we use float32 for Y instead of int64.

(b) In 1.4, "tf.one_hot(labels, depth, axis)" should be "tf.one_hot(labels, depth, axis=axis_number)".

(c) In 2.1, the expected output for Y should have shape (6, ?), not (10, ?).
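For readers tripped up by the same tf.one_hot call, here is a minimal pure-Python sketch (not the assignment's solution code) of what tf.one_hot(labels, depth) computes with its default axis=-1 — one row per label; transposing the result yields the (6, ?) classes-by-examples shape the reviewer mentions:

```python
def one_hot(labels, depth):
    """Sketch of tf.one_hot(labels, depth) with the default axis=-1:
    each integer label becomes a row with a 1.0 at that index."""
    return [[1.0 if j == lab else 0.0 for j in range(depth)] for lab in labels]

# Three example labels for a 6-class problem, as in the Week 3 assignment.
Y = one_hot([1, 4, 5], depth=6)
# Y has shape (3, 6); transposing gives the (6, 3) classes-by-examples layout.
```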