MLOps Engineer
Source: Himalayas
AI Summary Powered by Gemini
This MLOps Engineer role focuses on deploying, monitoring, and maintaining machine learning models in production, requiring strong Python, CI/CD, and cloud platform experience. The opportunity offers a remote, full-time position at the intersection of ML, DevOps, and software engineering.
Job Description
MLOps Engineer (Remote)
Location: Remote (Global)
Employment Type: Full-Time

About the Role
We are seeking a highly skilled MLOps Engineer to join our growing team. In this role, you will be responsible for deploying, monitoring, and maintaining machine learning models in production environments, ensuring reliability, scalability, and performance. You will work at the intersection of machine learning, DevOps, and software engineering, enabling seamless integration of AI models into business systems through robust CI/CD pipelines and automation.

Key Responsibilities
- Design, build, and maintain end-to-end ML pipelines
- Deploy machine learning models into production environments
- Implement and manage CI/CD pipelines for ML workflows
- Monitor model performance, data drift, and system health
- Automate model retraining and versioning processes
- Collaborate with Data Scientists and Engineers to productionise models
- Ensure scalability, reliability, and security of ML systems
- Manage cloud infrastructure for ML workloads (AWS, Azure, or GCP)
- Troubleshoot and resolve issues in production ML systems

Required Skills & Experience
- Strong experience in MLOps or DevOps within ML environments
- Proficiency in Python and scripting for automation
- Experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn)
- Hands-on experience with CI/CD tools (GitHub Actions, Jenkins, GitLab CI)
- Knowledge of containerisation (Docker) and orchestration (Kubernetes)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Familiarity with model monitoring tools (e.g., Prometheus, MLflow, Evidently)
- Understanding of data pipelines and ETL processes
- Experience with version control systems (Git)

Nice to Have
- Experience with Kubeflow, Airflow, or SageMaker
- Knowledge of infrastructure as code (Terraform, CloudFormation)
- Exposure to feature stores and model registries
- Experience with real-time/streaming data systems (Kafka, Spark)

Key Competencies
- Strong problem-solving and analytical thinking
- Ability to work independently in a remote environment
- Excellent collaboration and communication skills
- Detail-oriented with a focus on system reliability

What We Offer
- Fully remote work environment
- Flexible working hours
- Opportunity to work on cutting-edge AI/ML systems
- Collaborative and innovative team culture
- Competitive salary and benefits

Originally posted on Himalayas