I just finished **Google's Machine Learning Crash Course**, an introductory course that covers the basic concepts and walks through examples of real implementations with TensorFlow. Those examples are what encouraged me to take it.

## Crash Course vs. Coursera's Machine Learning

It is a much simpler and more practical course than Coursera's Machine Learning. The Coursera course focuses on making you understand how the algorithms work mathematically, while in Google's Crash Course the algorithms are treated almost like black boxes: you get a brief explanation and learn to implement them with TensorFlow.

And this is the big difference. Although the Google course explains the concepts and algorithms of Machine Learning in much less depth, it teaches you to apply them and to start using TensorFlow and Keras.

All exercises are done in Google Colab, so the development environment comes ready to use. This is a big difference from the Coursera course, where you work with MATLAB or Octave to implement the algorithms but see nothing of TensorFlow or of how to solve a real problem.

Quoting my comment from my review of that course:

> It is quite theoretical, but maybe that's why it seems like a good way to start: you are not only going to learn what to do but why you do it.
>
> - When to choose one algorithm or another.
> - How to choose and define the different parameters.
> - What problems can arise with the algorithms and, especially, what measures to take.

You can take Google's Machine Learning Crash Course even if you do not have a strong mathematics background; Andrew Ng's Coursera course requires one.

## Agenda: what the course covers

First, the course starts with an explanation of what Machine Learning is, its main concepts, and the types of problems it addresses. With that in place, it moves on to the following topics. Forgive the amount of English terminology, but the course is in English (although it is very easy to follow), and many of the key terms either have no translation or lose their meaning when translated, because in the context of Machine Learning everyone, everywhere, uses them in English.

- Linear Regression
- Squared loss: a popular loss function
- Gradient Descent and Stochastic Gradient Descent
- Learning rate
- Generalization
- Overfitting
- Validation set
- Feature crosses, including crosses of one-hot vectors
- Nonlinearities
- Regularization for simplicity and sparsity (L1 and L2)
- Logistic Regression
- Classification
- Accuracy, precision, and recall
- ROC Curve and AUC
- Neural networks (training, One vs. All, Softmax)
- Embeddings
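To give a flavor of the first few items on that list, here is a minimal sketch of my own (not an exercise taken from the course) that fits a linear regression with Keras using squared loss and stochastic gradient descent with an explicit learning rate. The data is synthetic and the hyperparameters are illustrative choices, not the course's.

```python
import numpy as np
import tensorflow as tf

# Synthetic data: y = 3x + 2 plus a little noise.
rng = np.random.default_rng(42)
x = rng.uniform(-1.0, 1.0, size=(200, 1)).astype("float32")
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.05, size=(200, 1)).astype("float32")

# A single Dense unit is exactly a linear model: y = wx + b.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),  # learning rate
    loss="mean_squared_error",                             # squared loss
)
model.fit(x, y, epochs=50, batch_size=16, verbose=0)

weight, bias = model.layers[0].get_weights()
print(weight[0][0], bias[0])  # should approach 3 and 2
```

In Colab this is the kind of thing you run cell by cell, tweaking the learning rate and epochs to see how convergence changes.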

As I said, everything is done in Google Colab.
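The classification side of the agenda (accuracy, precision, recall) is also easy to illustrate. This is my own quick sketch, not a course exercise, computing the three metrics by hand from a confusion matrix on made-up labels:

```python
import numpy as np

# Hypothetical true labels and model predictions for a binary classifier.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tp = int(np.sum((y_pred == 1) & (y_true == 1)))  # true positives
fp = int(np.sum((y_pred == 1) & (y_true == 0)))  # false positives
fn = int(np.sum((y_pred == 0) & (y_true == 1)))  # false negatives
tn = int(np.sum((y_pred == 0) & (y_true == 0)))  # true negatives

accuracy = (tp + tn) / len(y_true)   # fraction of correct predictions
precision = tp / (tp + fp)           # of the predicted positives, how many were right
recall = tp / (tp + fn)              # of the actual positives, how many were found
print(accuracy, precision, recall)   # prints 0.8 0.8 0.8 for this data
```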

## Who it is for

If you are just starting out and want to learn to implement simple examples, it's a good way to get going.

The course is about 15 hours long and you can do it at your own pace; although there are exercises, you do not need to submit assignments or pass any tests.

The course is free.

## So now what?

Since these courses are quick, I will surely look at the rest of what Google offers.

I will also keep trying some of the courses still on my list, to see how they are, before doing something serious at a more advanced level.

I have a serious project underway to build a tool at work, and what I need now is to start applying everything I have learned during this time and to wrestle with real problems.

I will keep reporting my progress on the blog.