W2 CTH - Jr. Data Architect / MLOps Engineer (Onsite at Columbus, OH)
Job Description
Job Title: Jr. Data Architect / MLOps Engineer
Job Location: Columbus, OH (Hybrid; local candidates preferred)
Job Duration: Contract to hire (6-month contract, then hire)

Must Have:
API, AWS, Docker, Git, Python, SQL, Terraform
Job Summary:

About Us: We are a forward-thinking team within a large enterprise bank, focused on leveraging machine learning, artificial intelligence, and data-driven solutions to improve business outcomes.

As part of the MLOps team, you will collaborate closely with data scientists, business users, product owners, and engineers to design, architect, and implement MLOps solutions that enable scalable, robust, and automated deployment of machine learning models. As an MLOps Solution Engineer, you will convert business requirements into actionable MLOps architectures, working alongside data science teams, product owners (on both the data science and MLOps sides), and Scrum Masters. You will also own the end-to-end productionization of machine learning models, ensuring their smooth, scalable, and secure deployment into production environments.

Key Responsibilities:
- Collaborate with product owners and data scientists to design MLOps solutions that translate business requirements into technical specifications and architectures.
- Work with MLOps engineers to ensure the solutions you design are implemented efficiently and aligned with best practices.
- Serve as a bridge between business users, product owners, and MLOps engineers, ensuring seamless communication and alignment on project goals.
- Gather requirements from business stakeholders and translate them into detailed technical documentation for MLOps teams.
- Use both proprietary and open-source MLOps tools to build flexible and scalable machine learning pipelines.
- Identify and adopt relevant open-source tools to enhance the team's MLOps platform, ensuring long-term flexibility and efficiency.
- Lead the end-to-end productionization process, from model training to deployment and monitoring in real-time or batch systems.
- Handle data drift, model drift, and versioning in production environments, ensuring minimal downtime and consistent model performance.
- Design and maintain robust CI/CD pipelines for model deployment using Azure DevOps and other tools, ensuring smooth collaboration with development teams.
- Design APIs for batch and real-time model inference, ensuring reliability and scalability in production (see the sketch after this list).
- Collaborate with Kafka engineers to automate data transfer processes and ensure accurate data pipelines.
- Keep up with the latest developments in MLOps, integrating new open-source tools to optimize workflows.
- Encourage continuous learning and experimentation within the team to foster innovation in MLOps practices.
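To give candidates a concrete sense of the real-time inference work described above, here is a minimal sketch of a scoring endpoint in Python. It is illustrative only, not the team's actual stack: the FastAPI framework, the artifact path model.joblib, and the flat feature vector are all assumptions.

```python
# Minimal real-time inference API sketch. Illustrative only: the model
# artifact path and the feature schema are hypothetical placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Assumed local artifact; in production the model would more likely be
# pulled from a model registry or S3 at startup.
model = joblib.load("model.joblib")

class Features(BaseModel):
    values: list[float]  # hypothetical flat feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    # Score one observation and return a JSON-serializable prediction.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

Saved as app.py, this would be served with `uvicorn app:app` and scored by POSTing a JSON body such as {"values": [0.1, 0.2]} to /predict.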
Required Skills & Qualifications:
- Deep expertise in Python for scripting, automation, and building machine learning pipelines.
- Strong experience managing AWS services (SageMaker, S3, Lambda) and using Terraform for infrastructure as code (see the sketch after this list).
- Extensive experience with Docker containerization and Linux systems management.
- Proven experience setting up and managing CI/CD pipelines for machine learning model deployment.
- Proficiency with Git for version control of code, models, and data.
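As an illustration of the AWS skills listed above, the sketch below calls an already-deployed SageMaker endpoint with boto3. The endpoint name, region, and payload schema are hypothetical; a real integration would also cover IAM permissions, error handling, and retries.

```python
# Sketch: invoking a deployed SageMaker endpoint with boto3.
# The endpoint name, region, and payload format are hypothetical placeholders.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

def score(features: list[float]) -> dict:
    # Send one observation to the (assumed) endpoint and parse the JSON reply.
    response = runtime.invoke_endpoint(
        EndpointName="churn-model-prod",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps({"instances": [features]}),
    )
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    print(score([0.4, 1.2, 3.5]))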
Preferred Qualifications:
- Experience with large language models and productionizing ML models in a cloud environment.
- Knowledge of open-source MLOps frameworks such as Kubeflow, MLflow, Airflow, or DVC.
- Familiarity with near real-time inference systems, batch processing, and model drift management.

Seniority level: Entry level
Employment type: Contract
Job function: Information Technology
Industries: IT Services and IT Consulting