[JENKINS-4646] Ability to wipe out workspace on only selected node #250

Open timja opened 15 years ago

timja commented 15 years ago

I recently created a new job and thought I had tied it to a particular slave.
Later I realized that I had in fact forgotten to tie it to that slave, and the
first few builds ran on master. I corrected this, and it is now running on the
slave as desired.

However, the master now has an old copy of the job's workspace too. Since the
workspace is ~1 GB, I would like to clean it up, but I want to leave the
actively used workspace on the slave intact: recreating it is much slower than
running an incremental build, since it needs a full SCM checkout followed by
some big downloads.

Unfortunately there seems to be no way through Hudson's GUI to delete the
workspace on a particular node (master, in my case). I presume "Wipe Out
Workspace" would delete all copies.

Not sure what the best GUI would be, but perhaps the .../wipeOutWorkspace page,
which displays a confirmation button, could have a list of checkboxes, all
initially checked, listing the nodes on which a copy of the workspace currently
resides (if there is more than one such node). You could uncheck some of them
if you wished. The node on which the last build ran should be highlighted.


Originally reported by jglick, imported from: Ability to wipe out workspace on only selected node
  • status: Open
  • priority: Major
  • resolution: Unresolved
  • imported: 2022-06-20
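
As a stop-gap until such a UI exists, the single-node wipe described above can be approximated from the Script Console. The sketch below is untested and uses placeholder names ("MyJob", and an empty nodeName to mean the master); adjust both before running:

import hudson.model.*

def jobName  = "MyJob"   // placeholder: job whose stale workspace should be removed
def nodeName = ""        // placeholder: "" targets the master, otherwise a slave's name

def job = Hudson.instance.getItemByFullName(jobName)
if (job == null) {
  println("No such job: " + jobName)
} else if (job.isBuilding()) {
  println("Job is building, leaving its workspaces alone")
} else {
  // The master is itself a Node; slaves are looked up by name.
  def node = nodeName ? Hudson.instance.getNode(nodeName) : Hudson.instance
  def ws = node?.getWorkspaceFor(job)
  if (ws != null && ws.exists()) {
    ws.deleteRecursive()
    println("Deleted " + ws.getRemote() + " on " + (nodeName ?: "master"))
  } else {
    println("Nothing to delete for " + jobName + " on " + (nodeName ?: "master"))
  }
}
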
timja commented 14 years ago

bklarson:

I've run into a similar problem. I have several slaves, and due to a repository bug the repository copies on a few of them became corrupt. I need to clear the workspace on all slaves, but the 'Wipe Out Workspace' button only deletes the copy on the slave that ran the most recent build.

I agree with the proposed interface - it would be nice to see a list of checkboxes, one for each slave.

timja commented 14 years ago

mrpotes:

I have the same issue as bklarson. The suggested fix in the description would be great.

timja commented 12 years ago

pmv:

At a minimum, could someone change the wording from 'Wipe Out Workspace' to 'Wipe Out Current Workspace' until this is looked at? I think that would better describe the current functionality.

Ideally, for us, 'Wipe Out Workspace' would delete the workspace from all slaves, since we don't tie jobs to slaves and the broken workspace may not be on the slave that built most recently. The proposed checkbox solution would work well.

timja commented 11 years ago

astraujums:

The following Groovy script wipes the workspaces of selected jobs on all nodes. Execute it from /computer/(master)/script.

Something like this could be implemented as a command "Wipe Out All Workspaces".

import hudson.model.*
// For each job
for (item in Hudson.instance.items)
{
  jobName = item.getFullDisplayName()
  // check that job is not building
  if (!item.isBuilding())
  {
    // TODO: Modify the following condition to select which jobs to affect
    if (jobName == "MyJob")
    {
      println("Wiping out workspaces of job " + jobName)
      customWorkspace = item.getCustomWorkspace()
      println("Custom workspace = " + customWorkspace)

      for (node in Hudson.getInstance().getNodes())
      {
        println("  Node: " + node.getDisplayName())
        workspacePath = node.getWorkspaceFor(item)
        if (workspacePath == null)
        {
          println("    Could not get workspace path")
        }
        else
        {
          if (customWorkspace != null)
          {
            workspacePath = node.getRootPath().child(customWorkspace)
          }

          pathAsString = workspacePath.getRemote()
          if (workspacePath.exists())
          {
            workspacePath.deleteRecursive()
            println("    Deleted from location " + pathAsString)
          }
          else
          {
            println("    Nothing to delete at " + pathAsString)
          }
        }
      }
    }
  }
  else
  {
    println("Skipping job " + jobName + ", currently building")
  }
}

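One caveat for the original report: Hudson.getInstance().getNodes() returns only the slave nodes, so the script above does not touch the master's copy of the workspace. If the master's copy should go too, a few lines along these lines (an untested sketch) could be added next to the per-node loop:

      masterWorkspace = Hudson.instance.getWorkspaceFor(item)
      if (masterWorkspace != null && masterWorkspace.exists())
      {
        masterWorkspace.deleteRecursive()
        println("  Deleted master copy at " + masterWorkspace.getRemote())
      }
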
timja commented 6 years ago

smd:

Our use case is similar: the repository workspace (TFS) has become corrupt (due to manual deletion of workspace folders) on one or more agents. You think you have cleaned everything up, and then months later a job runs on an agent whose workspace is still corrupt.

timja commented 2 years ago

[Originally duplicated by: JENKINS-17098]

timja commented 2 years ago

[Originally related to: JENKINS-9898]

timja commented 2 years ago

[Originally related to: JENKINS-26138]

timja commented 2 years ago

[Originally related to: JENKINS-6216]