<img src={require('./img/mlcamppost.png').default} alt="MLOPS ZOOM CAMP for ml models" width="900" height="450" />

## MLOps Zoom Camp -- From ML Model to Production

Machine Learning does not end at training a model. The real challenge begins when we move the model into production. This is where **MLOps** becomes essential. [Learn more about MLOps principles](https://ml-ops.org/).

This blog documents the foundational concepts of MLOps and a complete environment setup process using GitHub Codespaces, Docker, and Anaconda, forming the backbone of a production-ready ML workflow.

### Understanding the Goal

<img src={require('./img/mlcamp1.png').default} alt="MLOPS ZOOM CAMP for ml models" width="900" height="450" />

### Example Use Case

A practical example discussed in the course: predicting the duration of a taxi trip. This is a classic machine learning problem where we:

- Collect historical trip data
- Engineer features
- Train a predictive model
- Deploy it for real-time usage

This transition from experimentation to deployment is exactly the gap that MLOps fills.

<hr />

## The Machine Learning Lifecycle

Every ML system follows three fundamental stages.

<img src={require('./img/mlcamp2.png').default} alt="MLOPS ZOOM CAMP for ml models" width="900" height="450" />

### Design Phase

- Understanding the business objective
- Data collection and preprocessing
- Feature selection
- Choosing an appropriate model architecture

A strong design phase prevents downstream inefficiencies.

### Training Phase

- The model is trained on prepared datasets
- Performance metrics are evaluated
- Hyperparameters are tuned
- Validation is performed

The outcome is a model that performs reliably on unseen data.

### Operation Phase

- Deploying the model
- Monitoring performance
- Logging predictions
- Handling model drift
- Retraining when necessary

A production system must be stable, scalable, and reproducible.
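To make the taxi-trip example concrete, here is a minimal sketch of the experimentation step. The column names (`pickup_datetime`, `dropoff_datetime`, `distance_km`) and the tiny inline dataset are hypothetical stand-ins for real historical trip data, and scikit-learn is assumed as the modeling library:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Toy stand-in for historical trip data (hypothetical columns and values)
trips = pd.DataFrame({
    "pickup_datetime": pd.to_datetime(
        ["2024-01-01 08:00", "2024-01-01 09:00", "2024-01-01 10:00"]),
    "dropoff_datetime": pd.to_datetime(
        ["2024-01-01 08:15", "2024-01-01 09:30", "2024-01-01 10:20"]),
    "distance_km": [3.2, 7.5, 4.1],
})

# Feature engineering: the target is trip duration in minutes
trips["duration_min"] = (
    trips["dropoff_datetime"] - trips["pickup_datetime"]
).dt.total_seconds() / 60

# Train a simple predictive model on the engineered feature
model = LinearRegression()
model.fit(trips[["distance_km"]], trips["duration_min"])

# Serving step: predict the duration of a new 5 km trip
print(model.predict(pd.DataFrame({"distance_km": [5.0]}))[0])
```

Everything after the `fit` call is what eventually has to run inside a production service, which is precisely the part that MLOps tooling manages.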
You can also deploy AI tools such as MindsDB on the [Nife platform](https://docs.nife.io/docs/Guides/Openhub/how-to-deploy-minddb-from-openhub).

<hr/>

## Setting Up the GitHub Repository

<img src={require('./img/mlcamp3.png').default} alt="MLOPS ZOOM CAMP for ml models" width="900" height="450" />

### Create Repository

1. Navigate to GitHub.
2. Click **Create New Repository**.
3. Add a repository name.
4. Add a `.gitignore` and select the **Python** template.
5. Set visibility to **Public**.
6. Click **Create Repository**.

This repository will store code, notebooks, and configurations. You can follow the official GitHub guide for [creating repositories](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository).

<hr/>

## Using GitHub Codespaces

GitHub Codespaces provides a cloud-based development environment.

### Create Codespace

1. Click **Code** in your repository.
2. Select **Codespaces**.
3. Click **Create Codespace on main**.

This launches a fully configured remote development environment.

<hr/>

## Verifying Docker Installation

Run:

```bash
docker run hello-world
```

If Docker is installed correctly, a success message confirms that the container executed properly. Learn more about [Docker containers](https://docs.docker.com/get-started/).

<hr/>

## Opening in VS Code

- Click **Open in VS Code**
- Ensure VS Code Desktop is installed
- Install the GitHub Codespaces extension if required

<hr/>

## Installing Anaconda

### Step 1: Download the Installer

```bash
wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
```

Run the installer:

```bash
bash Anaconda3-2022.05-Linux-x86_64.sh
```

<hr/>

### Step 2: Initialize Anaconda

When prompted:

> Do you want to initialize Anaconda?
Type:

```bash
yes
```

Then restart the terminal:

```bash
bash
```

<hr/>

### Step 3: Verify Installation

```bash
which python
python -V
```

<hr/>

## Working with Jupyter Notebook

```bash
cd 01-intro
jupyter notebook
```

If prompted for authentication:

- Copy the token from the terminal
- Paste it into the browser

<hr/>

## Dependency Installation and Git Workflow

Inside the notebook:

```python
import pandas as pd

pd.__version__
```

Install PyArrow:

```bash
pip install pyarrow
```

After making changes, commit and push them (a `git commit` is required before `git push` will upload anything):

```bash
git status
git add 01-intro
git commit -m "Add intro notebook"
git status
git push
```

<hr/>

## Key Takeaways

- MLOps bridges the gap between experimentation and production.
- Environment configuration is critical for reproducibility.
- Docker enables containerized deployment.
- Anaconda simplifies dependency management.
- Git ensures version control and collaboration.
- Codespaces provides scalable cloud-based development.

[Nife.io](https://nife.io/case-study/rags_llm) also supports deployment of AI and LLM workloads using optimized GPU infrastructure and automated model deployment workflows.

<hr/>

## Conclusion

Machine Learning in isolation is incomplete. True impact comes when models are deployed, monitored, and maintained in production systems.

By setting up a structured environment using GitHub, Docker, and Anaconda, we establish the foundation required for scalable and reliable ML operations. This marks the beginning of building production-grade machine learning systems.

Modern infrastructure platforms like **Nife** enable developers to deploy applications across distributed cloud and edge environments, making it easier to run production-ready AI and ML workloads. Learn more at [Nife.io](https://nife.io/).