# MLOps: The What, Why, and How

As companies become more and more data-driven, with greater focus on building machine learning teams and creating models, it is important to understand the challenges that ML projects present in order to address them with good, standardized practices. Building a machine learning solution is a complex process involving many steps you may be familiar with: collecting and processing raw data, analysing the data, preparing it for training, constructing, training, and testing the model, detecting bias and analysing performance to validate the model, and finally deploying and monitoring it. It is clear how labor-intensive the entire process becomes when you need to repeat it multiple times, as business needs and data are constantly changing. This is where MLOps comes in.
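As a minimal illustration, the steps above can be sketched as composable pipeline stages. Every function name and data record below is invented for the sketch; in a real MLOps setup each stage would wrap your data tooling, training framework, and serving platform, and would be orchestrated and versioned rather than chained by hand.

```python
# Hypothetical sketch of an ML workflow as composable stages.
# All names and data here are illustrative stand-ins.

def collect(source):
    # Pull raw records from a source system (stubbed with in-memory data)
    return [
        {"text": "great product", "label": 1},
        {"text": "love it", "label": 1},
        {"text": "broken on arrival", "label": 0},
    ]

def preprocess(records):
    # Clean and featurize raw records for training
    return [(r["text"].lower().split(), r["label"]) for r in records]

def train(dataset):
    # Fit a trivial stand-in model: always predict the majority label
    labels = [label for _, label in dataset]
    majority = max(set(labels), key=labels.count)
    return lambda features: majority

def validate(model, dataset):
    # Measure accuracy; a real setup would also check bias and drift
    correct = sum(model(f) == label for f, label in dataset)
    return correct / len(dataset)

def run_pipeline(source):
    data = preprocess(collect(source))
    model = train(data)
    accuracy = validate(model, data)
    return model, accuracy  # a deploy/monitor stage would follow here
```

The point of the sketch is the shape, not the model: because each stage is a separate, testable unit, the whole pipeline can be re-run automatically whenever data or business needs change — which is exactly the repetition MLOps aims to streamline.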

# Interpreting Machine Learning Models with Python

Throughout the history of engineering and machine learning, choosing transparent models that humans or end users can interpret has been essential. In practice, this means using transparent data sources and simple, easy-to-interpret models such as linear models, decision trees, or even rule-based systems, despite their limitations in real-world scenarios where observations are nonlinear and highly specific. With the massive growth in popularity of machine learning and deep learning, the increasing complexity of models, and the spread of AI across all fields, it has become crucial to have approaches and mechanisms to explain models and to interpret both accurate and inaccurate predictions.
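To make concrete why linear models count as interpretable, here is a minimal sketch: we fit y = w·x + b by ordinary least squares on toy data (all numbers below are made up for illustration) and read the effect of the feature directly off the learned coefficient — no post-hoc explanation method needed.

```python
# Toy data, invented for the sketch: e.g. years of experience vs. salary in $k
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [30.0, 35.0, 40.0, 45.0, 50.0]

# Closed-form ordinary least squares for a single feature
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

# The model is fully transparent: each extra unit of x changes the
# prediction by exactly w, and b is the prediction at x = 0.
print(f"y = {w:.1f}*x + {b:.1f}")  # prints: y = 5.0*x + 25.0
```

This is the transparency the paragraph refers to: the coefficient itself is the explanation. A deep network fit to the same data would make the same predictions while offering no such direct reading of its parameters.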

# Customers Feedback Analysis using NLP - The Netflix Use Case

In the era of digitalization, most companies have various sources of customer feedback: social media, call logs, and mobile apps, to name a few. Analyzing such feedback to derive actionable insights is therefore becoming essential for any business with an online presence.

# Bayesian Optimization

Bayesian Optimization is a useful tool for optimizing an objective function, and thus for tuning machine learning models and simulations. Standard approaches like random search or grid search are usually expensive or slow, especially when the objective function is a black box (we can neither express $$f$$ analytically nor compute its derivative). Bayesian Optimization instead comes in handy by efficiently trading off exploration and exploitation to find a global optimum in a minimal number of steps. It relies on the idea of Bayes' theorem, posterior ∝ likelihood × prior, to quantify beliefs about an unknown objective function given samples from the domain $$D$$ and their evaluations via the objective function $$f$$. Bayesian optimization incorporates a prior belief about $$f$$ and updates that prior with samples drawn from $$f$$ to obtain a posterior that better approximates $$f$$: $$P(f \mid D) \propto P(D \mid f) \, P(f)$$.
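The loop described above can be sketched end to end in plain Python. This is a simplified, illustrative implementation, not a production optimizer: the surrogate is a tiny zero-mean Gaussian process with an RBF kernel, the acquisition function is the upper confidence bound (mu + kappa·sigma, which trades off exploitation of high predicted values against exploration of uncertain regions), and candidates are taken from a fixed grid rather than by optimizing the acquisition properly. The kernel length scale, jitter, and kappa values are assumptions chosen for the sketch.

```python
import math

def kernel(a, b, length=1.0):
    # Squared-exponential (RBF) kernel: correlation decays with distance
    return math.exp(-0.5 * (a - b) ** 2 / length ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting, for small linear systems
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, xq):
    # Posterior mean and variance of a zero-mean GP at query point xq,
    # given observed inputs X and values y (small jitter for stability)
    n = len(X)
    K = [[kernel(X[i], X[j]) + (1e-6 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, y)
    ks = [kernel(xq, xi) for xi in X]
    mu = sum(k * a for k, a in zip(ks, alpha))
    v = solve(K, ks)
    var = kernel(xq, xq) - sum(k * w for k, w in zip(ks, v))
    return mu, max(var, 0.0)

def bayes_opt(f, lo, hi, n_iter=10, kappa=2.0):
    # Maximize f on [lo, hi]: fit the GP surrogate to all samples so far,
    # then evaluate f where the upper confidence bound is highest.
    X = [lo, (lo + hi) / 2, hi]      # initial design
    y = [f(x) for x in X]
    grid = [lo + (hi - lo) * i / 100 for i in range(101)]
    for _ in range(n_iter):
        def ucb(x):
            mu, var = gp_posterior(X, y, x)
            return mu + kappa * math.sqrt(var)  # exploitation + exploration
        best = max(grid, key=ucb)
        X.append(best)
        y.append(f(best))
    i = max(range(len(y)), key=lambda j: y[j])
    return X[i], y[i]
```

For example, maximizing the invented objective `lambda x: -(x - 2) ** 2 + 4` on [0, 5] concentrates samples near the true optimum at x = 2 within a handful of evaluations, which is the point: each new sample is placed where the posterior says it is most promising, rather than uniformly at random.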