The Gaussian process (GP) is a probabilistic model for functions. Unique advantages of the GP are its generality, the fact that training and prediction can be performed analytically, and its ability to represent uncertainty. In this talk, we first present the concept of a generative model to introduce the probabilistic perspective on Machine Learning; then, via an intuitive extension of basic generative models, we introduce the GP model. We will also show how to train (or adjust) the GP model in the light of observed data, and how it can be used in real-world applications such as denoising, prediction, reconstruction and deconvolution of general time series. The talk concludes by proposing a novel generative model to address an inherent drawback of GP models: their inability to model non-Gaussian data.
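
The analytical training and prediction mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not the speaker's implementation: it assumes a squared-exponential (RBF) kernel with illustrative hyperparameters, and the function and variable names (`rbf_kernel`, `gp_posterior`, `noise`) are ours.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Closed-form GP posterior mean and covariance at the test inputs."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)                 # stable inversion via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                      # posterior mean
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                      # posterior covariance
    return mean, cov

# Toy denoising/prediction example: noisy samples of a sine wave
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x) + 0.1 * rng.standard_normal(20)
x_new = np.linspace(0, 2 * np.pi, 100)
mu, cov = gp_posterior(x, y, x_new)
std = np.sqrt(np.clip(np.diag(cov), 0, None))  # pointwise predictive uncertainty
```

The posterior mean `mu` gives a denoised estimate of the underlying function, while `std` quantifies the uncertainty of that estimate at each test point, illustrating the uncertainty representation the abstract refers to.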