DevSecMLOps: Security-by-Design for Trustworthy Machine Learning Pipelines

Posted 1 day ago


Toulouse, France | IRIT, Université de Toulouse | Full time

Topic description

Context

Machine Learning Operations (MLOps) has become essential to managing the lifecycle of machine learning (ML) models, enabling continuous delivery, automation, and reproducibility. However, the adoption of MLOps has advanced more quickly than the integration of robust security practices. Traditional software security practices, such as static analysis, dynamic scans, and vulnerability assessments, are well established, but ML pipelines present additional, unique security concerns [1], [2]. For instance, ML systems face risks such as adversarial attacks, model poisoning, training data compromise, drift, and injection attacks [3]. Privacy and compliance challenges, such as protecting personally identifiable information (PII) during data ingestion and model training, introduce further complexity that traditional security methods often overlook [4]. ML models therefore require security controls tailored to their lifecycle, from data collection to training, deployment, and monitoring.

Current MLOps practices lack comprehensive, built-in security mechanisms tailored to ML-specific risks, and existing approaches are fragmented: they either target specific threats, lack end-to-end traceability across the pipeline, or introduce prohibitive overhead that undermines the agility promised by MLOps. This has given rise to the emerging field of DevSecMLOps, which aims to extend the principles of DevSecOps [5, 6] to machine learning systems, ensuring both agility and security in AI-based applications.

The core problem is therefore the absence of a unified, systematic, pipeline-wide approach to integrating security-by-design into MLOps pipelines. We lack frameworks that can:

- embed security requirements explicitly into ML workflows from the start;
- continuously enforce and monitor these requirements across all pipeline stages; and
- adapt to evolving threats without slowing down the pace of deployment.
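As a concrete illustration of the kind of pipeline-stage control such a framework could enforce, the sketch below gates an incoming training batch on a simple label-distribution shift test, one cheap heuristic against label-flipping poisoning. The function names and the chi-squared-style threshold are illustrative assumptions for this posting, not part of any existing tool or of the project itself.

```python
from collections import Counter

def label_shift_score(baseline_labels, incoming_labels):
    """Chi-squared-style score comparing incoming label frequencies
    against a trusted baseline; a large value suggests the batch may
    have been tampered with (e.g. label-flipping poisoning)."""
    classes = set(baseline_labels) | set(incoming_labels)
    base, new = Counter(baseline_labels), Counter(incoming_labels)
    n_base, n_new = len(baseline_labels), len(incoming_labels)
    score = 0.0
    for c in classes:
        expected = (base[c] / n_base) * n_new  # count expected under baseline
        observed = new[c]
        if expected > 0:
            score += (observed - expected) ** 2 / expected
        else:
            score += observed  # class never seen in baseline: penalise directly
    return score

def gate_training_batch(baseline_labels, incoming_labels, threshold=10.0):
    """Pipeline gate: accept the batch only if the shift score is small.
    The threshold here is an arbitrary illustrative value."""
    return label_shift_score(baseline_labels, incoming_labels) <= threshold
```

In a real pipeline the threshold would be calibrated on historical batches, and a check like this would be only one of several automated gates (provenance, schema, drift) run at the ingestion stage.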
Without such an approach, organizations risk deploying AI systems that are performant but fragile, exposed to critical security and privacy breaches.

Objectives

The PhD will investigate the foundations and practical mechanisms of DevSecMLOps. The security focus will mainly concern privacy: users of ML-based solutions are legitimately concerned about the fate of their data (e.g., where it is stored and who has access to it), and data anonymization is a key concern. The other facets of security (e.g., who is responsible in the event of a security problem? how can ML models be made robust against attacks and prevented from being used maliciously?) will also be taken into account. The research will focus on embedding security requirements directly into ML workflows, ensuring that threats such as data poisoning, adversarial manipulation, and privacy leakage are anticipated and mitigated early. It will also explore AI-driven automation to support continuous security checks, balancing the rigour of security with the agility of continuous delivery. The expected result is a methodological and technical framework that operationalizes security for ML pipelines, enabling organizations to deploy AI systems that are both performant and trustworthy.

Mission

The PhD candidate will conduct a comprehensive study of vulnerabilities across ML lifecycles, identify the security issues associated with current MLOps practices, and analyze how existing DevSecOps principles can be extended to MLOps. The candidate will design security-by-design mechanisms tailored to ML workflows, from data ingestion and preprocessing to model training and deployment, while acknowledging that such systems evolve rapidly. The candidate will also explore the use of machine learning for automating security checks, generating adversarial tests, and detecting pipeline anomalies.
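To make the privacy concern concrete, the sketch below applies keyed pseudonymization (HMAC-SHA256) to direct identifiers before records enter a training pipeline. The field names, key handling, and token truncation are illustrative assumptions; and as the comments note, pseudonymization alone does not guarantee anonymity against linkage or inference attacks, which is precisely the kind of gap the thesis would study.

```python
import hmac
import hashlib

def pseudonymize(record, pii_fields, secret_key):
    """Replace direct identifiers with keyed hashes before ingestion.
    The same input always maps to the same token, so joins across
    datasets still work, but the raw value is never stored.
    Caveat: this is pseudonymization, not full anonymization; linkage
    and inference attacks on the remaining fields must still be assessed.
    Truncating the digest (here to 16 hex chars, an arbitrary choice
    for readability) trades collision resistance for shorter tokens."""
    out = dict(record)  # leave the caller's record untouched
    for field in pii_fields:
        if field in out:
            digest = hmac.new(secret_key, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    return out
```

A step like this would typically run as the first stage of the ingestion pipeline, with the key held in a secrets manager rather than in code.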
Finally, the proposed solutions will be validated through industrial case studies (from Softeam Group), demonstrating their effectiveness in mitigating threats while maintaining reproducibility and delivery speed.

References

[1] X. Zhang, 'Conceptualizing, Applying and Evaluating SecMLOps: A Paradigm for Embedding Security into the ML Lifecycle', Carleton University.
[2] B. Eken, S. Pallewatta, N. Tran, A. Tosun, and M. A. Babar, 'A Multivocal Review of MLOps Practices, Challenges and Open Issues', ACM Comput. Surv.
[3] F. Hinder, V. Vaquet, and B. Hammer, 'Adversarial Attacks for Drift Detection'.
[4] S. Panchumarthi, 'DevSecMLOps: A Security Framework for Machine Learning Pipelines', Authorea Preprints.
[5] E. P. Enoiu, D. Truscan, A. Sadovykh, and W. Mallouli, 'VeriDevOps Software Methodology: Security Verification and Validation for DevOps Practices', in Proceedings of the 18th International Conference on Availability, Reliability and Security (ARES), pp. 1-9.
[6] I. Nigmatullin, A. Sadovykh, N. Messe, S. Ebersold, and J.-M. Bruel, 'RQCODE: Towards Object-Oriented Requirements in the Software Security Domain', in IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp. 2-6. IEEE.

Starting date: -04-01
Funding category: Public/private mixed funding
Funding details: ANR JCJC


