30

I'm rebuilding an existing build pipeline as a Jenkins declarative pipeline (multi-branch pipeline) and have a problem handling build promotion.

After packaging and stashing all relevant files, the pipeline is supposed to wait for user input before triggering deployment.

If I just add an input step, the current build node is blocked. As this executor is pretty heavyweight, I would like to move this step to a more lightweight machine.

Initially I did the job as a scripted pipeline and just created two different node('label') blocks. Is there a way to do something similar with the declarative syntax?

node('spine') { 
    stage('builder') {
        sh 'mvn clean compile'
        stash name: 'artifact', includes: 'target/*.war'
    }
}
node('lightweight') {
    stage('wait') {
        timeout(time:5, unit:'DAYS') {
            input message:'Approve deployment?'
        }
    }
    // add deployment stages
}

I tried a couple of things already:

Configuring the agent at the top level and adding an additional agent config to the promotion stage, but then two executors are blocked, because the build node defined at the top level is not released.

Setting agent none at the top level and configuring the agents per stage. Then the git checkout is not present on the first node.

EDIT 1

I reconfigured my pipeline following your advice; it currently looks like this:

pipeline {
    agent none
    tools {
        maven 'M3'
    }
    stages {
        stage('Build') {
            agent { label 'spine' }
            steps {
                checkout scm // needed, otherwise the workspace on the first stage is empty
                sh "mvn clean compile"
            }
        }
        stage('Test') {
            agent { label 'spine' }
            steps {
                sh "mvn verify" // fails because the workspace is empty again
                junit '**/target/surefire-reports/TEST-*.xml'
            }
        }
    }
}

This build will fail because the workspace does not carry over between stages, as they don't necessarily run on the same executor.
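One workaround (my own sketch, not from the original question) is to make each stage self-contained by stashing the build output at the end of one stage and unstashing it at the start of the next, so it no longer matters which executor a stage lands on. The stash name and includes pattern below are illustrative:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'spine' }
            steps {
                checkout scm
                sh "mvn clean compile"
                // save the whole workspace so a later stage can restore it
                // (expensive for large workspaces; narrow the pattern if possible)
                stash name: 'ws', includes: '**/*'
            }
        }
        stage('Test') {
            agent { label 'spine' }
            steps {
                // restore whatever the Build stage produced, regardless of executor
                unstash 'ws'
                sh "mvn verify"
                junit '**/target/surefire-reports/TEST-*.xml'
            }
        }
    }
}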

EDIT 2

Apparently the stages sometimes run on the same executor and sometimes don't. (We spawn build slaves on our Mesos/DC/OS cluster on demand, so changing the executor mid-build would be a problem.)

I expected Jenkins to just keep the current executor as long as the label in the agent definition does not change.

5 Answers

37

See best practice 7: Don’t: Use input within a node block. In a declarative pipeline, the node selection is done through the agent directive.

The documentation here describes how you can define agent none for the pipeline and then use a stage-level agent directive to run the stages on the required nodes. I tried the opposite too (define a global agent on some node and then define agent none at stage level for the input), but that doesn't work: once the pipeline has allocated a slave, you can't release it for one or more specific stages.

This is the structure of our pipeline:

pipeline {
  agent none
  stages {
    stage('Build') {
      agent { label 'yona' }
      steps {
        ...
      }
    }
    stage('Decide tag on Docker Hub') {
      agent none
      steps {
        script {
          env.TAG_ON_DOCKER_HUB = input message: 'User input required',
              parameters: [choice(name: 'Tag on Docker Hub', choices: 'no\nyes', description: 'Choose "yes" if you want to deploy this build')]
        }
      }
    }
    stage('Tag on Docker Hub') {
      agent { label 'yona' }
      when {
        environment name: 'TAG_ON_DOCKER_HUB', value: 'yes'
      }
      steps {
        ...
      }
    }
  }
}

Generally, the build stages execute on a build slave labeled "yona", but the input stage runs on the master.


9 Comments

This doesn't work for declarative pipelines; there is no node block.
I tried the same thing you did and bumped into a problem: apparently the workspace does not carry over from one stage to the next, even if they run on the same agent. My second build stage (Maven build) finds an empty workspace.
So this is the equivalent of this scripted pipeline: gist.github.com/mryan43/a823a57d91f49afb989155c7decbe6be. That's 14 lines of code instead of 29... weren't declarative pipelines supposed to be simpler?
With a distributed pipeline you'll need to stash the artifacts you need from one stage to another. You can also check out the SCM repo if that's what holds the files you need. Jenkins 2 allows a pipeline to run on any agent at any time and provides the stash mechanism to let you move around. Stash also helps if the master goes down while a stage is running: the stashed context lets the pipeline resume when the master comes back up.
Nice answer, thank you! But what about best practice 9: Don't: Set environment variables with the env global variable? I prefer assigning the result of input to a Groovy variable that can also be used later in the pipeline (see the sketch below).
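Regarding that last comment, here is a minimal sketch (my illustration, not part of the answer above) of keeping the input result in a script-level Groovy variable instead of the env global; the variable name and choices are examples only:

// declared outside the pipeline block so all stages can see it
def tagOnDockerHub = 'no'

pipeline {
  agent none
  stages {
    stage('Decide tag on Docker Hub') {
      agent none
      steps {
        script {
          // input with a single parameter returns that parameter's value directly
          tagOnDockerHub = input message: 'User input required',
              parameters: [choice(name: 'Tag on Docker Hub', choices: 'no\nyes',
                                  description: 'Choose "yes" if you want to deploy this build')]
        }
      }
    }
    stage('Tag on Docker Hub') {
      agent { label 'yona' }
      when {
        // compare the plain Groovy variable instead of an environment variable
        expression { tagOnDockerHub == 'yes' }
      }
      steps {
        echo 'tagging on Docker Hub ...'
      }
    }
  }
}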
25

Another way to do it is using the expression directive and beforeAgent, which skips the "decide" step and avoids messing with the "env" global:

pipeline {
    agent none

    stages {
        stage('Tag on Docker Hub') {
            when {
                expression {
                    input message: 'Tag on Docker Hub?'
                    // if input is Aborted, the whole build will fail, otherwise
                    // we must return true to continue
                    return true
                }
                beforeAgent true
            }

            agent { label 'yona' }

            steps {
                ...
            }
        }
    }
}
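A possible variation (my own sketch, not from this answer): ask for a choice and return the comparison from the expression, so answering "no" simply skips the stage instead of aborting the build. The parameter name is illustrative:

when {
    beforeAgent true
    expression {
        // a single-parameter input returns the selected value directly;
        // returning false skips the stage without failing the build
        def answer = input message: 'Tag on Docker Hub?',
            parameters: [choice(name: 'DEPLOY', choices: 'no\nyes',
                                description: 'Choose "yes" to run this stage')]
        return answer == 'yes'
    }
}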

1 Comment

This is by far the best answer.
5

I know this thread is old, but I believe a solution to the "Edit 2" issue, besides stashing, is to use nested stages.

https://jenkins.io/blog/2018/07/02/whats-new-declarative-piepline-13x-sequential-stages/#running-multiple-stages-with-the-same-agent-or-environment-or-options

According to this page:

... if you are using multiple agents in your Pipeline, but would like to be sure that stages using the same agent use the same workspace, you can use a parent stage with an agent directive on it, and then all the stages inside its stages directive will run on the same executor, in the same workspace.

Here is the example provided:

pipeline {
    agent none

    stages {
        stage("build and test the project") {
            agent {
                docker "our-build-tools-image"
            }
            stages {
               stage("build") {
                   steps {
                       sh "./build.sh"
                   }
               }
               stage("test") {
                   steps {
                       sh "./test.sh"
                   }
               }
            }
            post {
                success {
                    stash name: "artifacts", includes: "artifacts/**/*"
                }
            }
        }

        stage("deploy the artifacts if a user confirms") {
            input {
                message "Should we deploy the project?"
            }
            agent {
                docker "our-deploy-tools-image"
            }
            steps {
                sh "./deploy.sh"
            }
        }
    }
}
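One detail worth noting (my addition, assuming deploy.sh reads the stashed files): the parent stage stashes the artifacts in its post/success block, so the deploy stage would presumably unstash them before running the script, for example:

steps {
    unstash "artifacts"   // restore the build outputs stashed by the parent stage
    sh "./deploy.sh"
}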

1 Comment

This was helpful. I was able to combine this answer with the one from emerino to get what I wanted.
1

Use agent none at the top level and define an agent for every stage except the stage containing the input step.

Source: discussion in Use a lightweight executor for a declarative pipeline stage (agent none)

Update: What do you mean by "the git checkout is not present on the first node"? Please show what you've got so far for the declarative pipeline.


0

The OP's question was how to "wait for user input in a Declarative Pipeline without blocking ...". This doesn't seem possible: attempts to use agent none do not free the build executor in a declarative pipeline.

This:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'big-boi' }
            steps {
                echo 'Stubbed'
            }
        }

        stage('Prompt for deploy') {
            agent { label 'tiny' }
            steps {
                input 'Deploy this?'
            }
        }

        stage('Deploy') {
            agent { label 'big-boi' }
            steps {
                echo "Deploying"
                build job: 'deploy-to-higher-environment'
            }
        }
    }
}

... when run, still blocks an executor while the build waits for input (screenshots of the build and executor views omitted).

2 Comments

The OP's question was "How to wait for user input in a Declarative Pipeline without blocking a heavyweight executor", he's fine with blocking a lightweight one.
Hey Johntron! You can avoid blocking an agent by combining the beforeAgent option with an expression block, check my answer for more details. Btw, we miss you, hope all is going well!
