
Conversation


@likitha likitha commented Jun 30, 2015

…ide storage.

While live migrating a volume, CS chooses the endpoint to perform the migration by selecting any host that has the storage containing the volume mounted on it.
Instead, if the volume is attached to a running VM, the endpoint chosen by CS should be the host that contains the VM.


asfbot commented Jun 30, 2015

cloudstack-pull-requests #674 SUCCESS
This pull request looks good

Author

likitha commented Jul 1, 2015

@wilderrodrigues,
Background:

  1. When CS tries to live migrate a volume, it chooses a host endpoint to perform the migration by selecting any host that has the storage containing the volume mounted on it. So when we attempt to migrate a volume that is on zone-wide storage, the endpoint could be any host in the zone, because with zone-wide storage every host in the zone has the storage mounted on it.
  2. During migration, vCenter expects the target datastore/storage pool to be mounted on the source host, because the VM obviously needs to be able to access the target datastore.

Now in the reported issue, suppose a host that doesn't contain the VM is chosen as the endpoint. As mentioned above, this can happen because today CS picks any host that has the source storage mounted on it. Migration then fails because that host can't see the target datastore.

By picking the host running the volume's VM as the endpoint, we ensure that both the source and target storage pools are visible to that host, irrespective of the scope of the storage pools.

Previously, similar fixes have been made for other operations like volume deletion and snapshot creation, because in those cases too the operation requires the host endpoint to be the one running the volume's VM.
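The selection rule described above can be sketched as follows. This is a hypothetical, simplified model: `Host`, `Vm`, `Volume`, and `selectEndpoint` are illustrative names, not CloudStack's actual classes.

```java
import java.util.List;
import java.util.Optional;

public class EndpointSelectorSketch {
    record Host(String name, List<String> mountedPools) {}
    record Vm(String name, Host host, boolean running) {}
    record Volume(String name, String poolId, Vm attachedVm) {}

    // Prefer the host running the volume's VM: that host is guaranteed to
    // see both the source and the target pool, whatever their scope.
    // Otherwise fall back to the old behaviour: any host that has the
    // source pool mounted.
    static Optional<Host> selectEndpoint(Volume vol, List<Host> hosts) {
        if (vol.attachedVm() != null && vol.attachedVm().running()) {
            return Optional.of(vol.attachedVm().host());
        }
        return hosts.stream()
                .filter(h -> h.mountedPools().contains(vol.poolId()))
                .findAny();
    }

    public static void main(String[] args) {
        Host h1 = new Host("h1", List.of("zone-pool"));
        Host h2 = new Host("h2", List.of("zone-pool", "cluster-pool"));
        Volume attached = new Volume("vol1", "zone-pool", new Vm("vm1", h2, true));
        // The VM's host (h2) is picked, even though h1 also mounts the pool.
        System.out.println(selectEndpoint(attached, List.of(h1, h2)).get().name()); // h2
    }
}
```

With a detached volume the fallback branch runs and any host mounting the source pool is acceptable, which is exactly the pre-fix behaviour.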

@wilderrodrigues
Contributor

Hi @likitha

I understood what the issue says, but what I really meant concerns what the code does.

The only practical change in the code was this:

if (volume.getHypervisorType() == Hypervisor.HypervisorType.Hyperv || volume.getHypervisorType() == Hypervisor.HypervisorType.VMware) {
...
}

Which means that now it will also enter the if block when the hypervisor is of type HypervisorType.VMware.

So, what you really want is: support volume migration from zone-wide to cluster-wide storage when hypervisor is VMware.

Is that correct?

If you look further at the code, when CS does DELETEVOLUME it takes into account only VMware, so HyperV wouldn't be supported there.

[screenshot: the DELETEVOLUME code path, which checks only for VMware]

Cheers,
Wilder

Author

likitha commented Jul 1, 2015

Yes, now in addition to HyperV, in the case of VMware too we want CS to choose the host containing the VM as the endpoint when the storage action is 'MIGRATEVOLUME'.
Since the code to do that already exists for HyperV, I simply added a check for VMware there.
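The shape of that check can be sketched minimally as below. The enum values and the helper name are illustrative, loosely modelled on what the comments describe rather than copied from CloudStack's actual endpoint selector.

```java
enum HypervisorType { Hyperv, VMware, KVM, XenServer }
enum StorageAction { MIGRATEVOLUME, DELETEVOLUME, TAKESNAPSHOT }

public class MigrateEndpointCheck {
    // For MIGRATEVOLUME, the endpoint must be the host running the volume's
    // VM. Before this PR only Hyperv triggered that path; VMware is added.
    static boolean mustUseVmHost(StorageAction action, HypervisorType hv) {
        if (action == StorageAction.MIGRATEVOLUME) {
            return hv == HypervisorType.Hyperv || hv == HypervisorType.VMware;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(mustUseVmHost(StorageAction.MIGRATEVOLUME, HypervisorType.VMware)); // true
        System.out.println(mustUseVmHost(StorageAction.MIGRATEVOLUME, HypervisorType.KVM));    // false
    }
}
```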

@sateesh-chodapuneedi
Member

LGTM

@wilderrodrigues
Contributor

Hi @likitha

And what about DeleteVolume? It now supports only VMware, while MigrateVolume supports both VMware and HyperV. It would be nice to keep them consistent.

I have 2 suggestions:

  1. Edit the issue title to make sure that it complies with your fix. Something like this: Failed to migrate a volume from zone-wide to cluster-wide storage when using VMware.
  2. If you don't mind, create another issue to deal with the StorageAction.DELETEVOLUME and change it in a separate PR. I think it should be consistent.

You got my LGTM based on the explanations you gave.

Cheers,
Wilder

Author

likitha commented Jul 1, 2015

Thanks for taking the time to review @wilderrodrigues.
I have updated the issue description.
As far as 'StorageAction.DELETEVOLUME' is concerned, I am not entirely sure the same logic applies to HyperV. In the case of VMware, since we can't access the volume's managed object directly, we reconfigure the VM's managed object to detach the volume and then delete it; that is why we choose the endpoint based on the VM. I am not sure whether HyperV has the same problem. In any case, I will open a ticket to track it so that a HyperV expert can comment on it.
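The VMware delete flow described here (detach via VM reconfigure, then delete the file) can be sketched roughly as follows. Everything in this sketch (`VmMo`, `Datastore`, `detachDisk`) is an illustrative placeholder, not vSphere's real API.

```java
import java.util.ArrayList;
import java.util.List;

public class VmwareDeleteVolumeSketch {
    static class Datastore {
        final List<String> files = new ArrayList<>();
    }
    static class VmMo { // stand-in for the VM's managed object
        final List<String> attachedDisks = new ArrayList<>();
        // In vSphere this would be done by reconfiguring the VM with a
        // remove-disk device spec, since the disk itself has no directly
        // addressable managed object.
        void detachDisk(String path) { attachedDisks.remove(path); }
    }

    // Must run on the host owning the VM: only the VM's managed object can
    // release the disk, which is why the endpoint is chosen based on the VM.
    static void deleteVolume(VmMo vm, Datastore ds, String diskPath) {
        vm.detachDisk(diskPath);
        ds.files.remove(diskPath);
    }

    public static void main(String[] args) {
        Datastore ds = new Datastore();
        ds.files.add("[ds1] vm1/vol1.vmdk");
        VmMo vm = new VmMo();
        vm.attachedDisks.add("[ds1] vm1/vol1.vmdk");
        deleteVolume(vm, ds, "[ds1] vm1/vol1.vmdk");
        System.out.println(vm.attachedDisks.isEmpty() && ds.files.isEmpty()); // true
    }
}
```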

@asfgit asfgit closed this in 299c07c Jul 1, 2015
rohityadavcloud pushed a commit that referenced this pull request Jan 20, 2021
