Taking fast and efficient volume snapshots with XenServer (and your storage provider) #1403
Conversation
Here's a copy of my Marvin integration tests (I added a .txt extension so that I could upload the file, as uploading a .py file is not permitted): Here are the most recent test results:
```java
if (errMsg == null) {
    snapshotInfo.processEvent(Event.OperationSuccessed);
...
if (copyCmdAnswer == null || !copyCmdAnswer.getResult()) {
    if (copyCmdAnswer != null && copyCmdAnswer.getDetails() != null && !copyCmdAnswer.getDetails().isEmpty()) {
```
@mike-tutkowski Could you please use StringUtils.isEmpty (https://commons.apache.org/proper/commons-lang/javadocs/api-2.6/org/apache/commons/lang/StringUtils.html#isEmpty(java.lang.String))?
It is a null-safe test of whether the String returned by copyCmdAnswer.getDetails() is empty (you can use this method at lines 260, 363, and 403 in the StorageSystemDataMotionStrategy class).
Thanks.
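For illustration, a small sketch of the null-safe behaviour of commons-lang's StringUtils.isEmpty (the sample values are made up):

```java
import org.apache.commons.lang.StringUtils;

public class NullSafeExample {
    public static void main(String[] args) {
        String details = null;
        // Unlike details.isEmpty(), this does not throw an NPE when details is null.
        System.out.println(StringUtils.isEmpty(details));     // true
        System.out.println(StringUtils.isEmpty(""));          // true
        System.out.println(StringUtils.isEmpty("disk full")); // false
    }
}
```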
Done - thanks
Thanks for the comments, Gabriel. I have pushed a new commit with those updates.
```java
    return Boolean.parseBoolean(property);
}

private Void handleCreateTemplateFromSnapshot(SnapshotInfo snapshotInfo, TemplateInfo templateInfo, AsyncCompletionCallback<CopyCommandResult> callback) {
```
Hi @mike-tutkowski,
Consider changing "Void" to "void" and removing the "return null;" at the end of this method.
Another suggestion is to create a single method, if possible, for lines 260 to 266 and 363 to 371; this would avoid some code duplication.
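For context, a generic illustration of the difference (a sketch, not the actual CloudStack code). The boxed Void type forces an explicit return null, whereas the primitive void does not; as it turns out later in this thread, Void is still needed here because the methods participate in AOP.

```java
public class VoidVsVoid {
    // java.lang.Void is an uninstantiable reference type, so the only value
    // a Void-returning method can return is null.
    static Void boxed() {
        return null;
    }

    // The primitive void keyword requires no return statement at all.
    static void primitive() {
    }

    public static void main(String[] args) {
        boxed();
        primitive();
    }
}
```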
Thanks. Initially this method was patterned after an existing set of methods that used the "Void" approach. I wasn't sure whether there was a good reason they did it that way, so I erred on the side of caution and repeated the pattern.
As it turned out, someone later changed those methods from "Void" to "void" and so I ended up doing so in my sandbox (but haven't yet pushed the changes).
I see... Using "Void" instead of "void" is not a good programming practice and should be "avoided" =P.
Another suggestion that I forgot to mention is to squash your commits into one.
I believe our current best practice is to not squash commits (i.e. to preserve the history).
@mike-tutkowski if the commits are atomic and don't break CI, yes; otherwise, no. Please use your own judgement.
LGTM. I've also been testing this in my dev environment with no issues so far.
ACS CI BVT Run Summary: The following tests have known issues. Link to logs folder (search by build_no): https://www.dropbox.com/sh/yj3wnzbceo9uef2/AAB6u-Iap-xztdm6jHX9SjPja?dl=0
Failed tests:
Skipped tests:
Passed test suites:
Here are my updated Marvin tests for this feature (with a "txt" extension because GitHub doesn't allow you to upload files with a "py" extension): TestSnapshots.py.txt
Can you rebase and give me a squashed commit for this? Also, regarding the code: I have seen a lot of people cleaning up code to remove the "_" prefix on member variables. I will add this to my queue to run a full CI against it again. Also, I am looking for two distinct LGTMs for every PR, so I would like another LGTM code review. Can someone from the dev list please give me a review? Thx...
Hi @swill - No problem for me to squash the commits, but are you sure you want to lose the history of the individual commits? Also, from what I've seen over the years, it seems more common to prefix instance member variables with "_" than not, but I'm not sure if we have a coding guideline around that. Can you also clarify whether an LGTM has to come from a committer, or can it be from any dev on the list? If any dev on the list, we have Syed and, I believe, Gabriel (although I didn't see a literal LGTM from him; rather, he provided his comments on the code). Thanks!
I have not seen a style guide, but I have not looked. I have just seen a bunch of PRs recently, and there seems to be a consistent effort to remove the "_" prefix. I am not requiring the LGTM code reviews to come from committers. If you are active on GitHub and are adding value to the project by reviewing code, I am willing to work with that. Because I am not being strict about who reviews the code, I am also enforcing that at least two people review it, so I get more eyes on every PR. As for the squash: it may make sense to have some history, but there are a lot of commits in there that are entirely useless outside the greater context of this PR. To me, the final version of the merged code trumps all. If it is squashed, we can more easily evaluate the changes associated with this commit later. If you think there are points in the history of this feature that deserve to be called out as important, you can squash into 3 or 4 commits, but 43 commits for a single PR is crazy and will add more confusion than clarity.
No problem, @swill, I can just squash them all. I was just thinking that the fact that we are merging could preserve the individual history (in a different branch that makes its way back to the master branch), but I'm fine with just dropping the history and having a single commit.
@mike-tutkowski @swill I don't think a formal guideline has been set, but we have been pushing toward adopting more idiomatic Java naming conventions. The use of the "_" prefix is inconsistent with that direction.
@mike-tutkowski why can't the Marvin test cases be checked in? While we can't run them, it would be nice to review them to understand exactly how the feature is being tested. We can exclude them by default to avoid running into problems on full runs that don't have SolidFire hardware available.
The CloudStack community can definitely have those tests, if it wants them. Right now, however, we don't have any way to enable other people (besides those who own SolidFire equipment) to successfully run those tests. Perhaps SolidFire will have a virtual node at some point that can be handed over to Apache Infra for this purpose.
```java
private boolean _executeInSequence = true;

public ResignatureCommand(final Map<String, String> details) {
    _details = details;
```
Copy the details to an ImmutableMap to avoid side-effects from the caller making subsequent changes to the map. It will also prevent the same side-effects when the map is returned from getDetails.
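A minimal sketch of the defensive-copy suggestion, assuming Guava's ImmutableMap and a non-null details argument (abbreviated from the diff above):

```java
import com.google.common.collect.ImmutableMap;
import java.util.Map;

public class ResignatureCommand {
    private final Map<String, String> _details;

    public ResignatureCommand(final Map<String, String> details) {
        // Defensive copy: later changes to the caller's map cannot leak in.
        _details = ImmutableMap.copyOf(details);
    }

    public Map<String, String> getDetails() {
        // The map is immutable, so callers cannot mutate our state through this reference either.
        return _details;
    }
}
```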
I'm not sure what the standard for "idiomatic" is here. :) I've usually seen _ (and sometimes m_) used to mark a member variable versus a local variable, used in concert with camel case. Also, for public static member variables, I've usually seen all upper case with individual words separated by underscores (and the variable not prefixed with an underscore). We should probably develop a community standard.
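For illustration, a compact sketch of the conventions Mike describes (all names here are made up):

```java
public class NamingConventions {
    // Public static constants: upper case, words separated by underscores, no leading underscore.
    public static final int MAX_SNAPSHOT_COUNT = 32;

    // Member variables, per the style discussed here: leading underscore (or sometimes m_) plus camel case.
    private int _snapshotCount;

    public void setSnapshotCount(int snapshotCount) { // locals/parameters: plain camel case
        _snapshotCount = snapshotCount;
    }
}
```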
```java
}

private boolean canHandle(DataStore dataStore) {
private boolean canHandle(DataObject dataObject) {
```
- What if dataObject is null?
- Would it be possible to add some unit test cases for this method?
I would say that would be an infrastructure failure on the part of CloudStack (passing in null there). We could defend against it, but then we should probably do so everywhere canHandle is implemented.
@mike-tutkowski while it's unlikely, we should defend against it, if for nothing more than providing a better error than an NPE. This seems like a good place to use Preconditions.checkArgument with a meaningful error message. It also better pinpoints the origin of the failure.
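A sketch of that suggestion, assuming Guava's Preconditions (the DataObject stand-in and the method body are placeholders, not the real code):

```java
import com.google.common.base.Preconditions;

interface DataObject { } // stand-in for the CloudStack interface under review

public class CanHandleGuard {
    boolean canHandle(DataObject dataObject) {
        // Fail fast with a meaningful message instead of a bare NPE further down the call chain.
        Preconditions.checkArgument(dataObject != null,
                "canHandle(DataObject) requires a non-null dataObject");
        // ... the real capability checks would follow here ...
        return false;
    }
}
```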
@DaanHoogland I think I addressed all your concerns. I plan to rebuild this locally, then push to GitHub in a bit.

@DaanHoogland I had to put the "Void" return types back. It is used for AOP and won't compile with "void" for those two methods.
```java
}

    return true;
}
```
The DAO calls in the for loop are concerning from a performance perspective (both for the speed of this call and the load placed on the database). Ideally, it would be calculated in a single join. Would it be possible to craft a query across the details for all hosts in a cluster where the detail key = "supportsResign" and the value = "true"?
Isn't there a limit of like 16 - 32 hosts in a cluster (therefore only 16 - 32 DB calls in the for loop)?
See what you think... I just added a method on the HostDetailsDao to return a Map where the key is the host ID and the value is true or false (you pass in the "name" field you are interested in).
This way we do just one query and then look up the results in the map each time through the "for" loop (see the sketch below).
If we want to do a join of the host, host_details, and cluster tables (which, I believe, would be necessary to reduce the results coming back from the DB), perhaps you know of somewhere in our codebase where we do a similar thing? Thanks
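A rough sketch of the single-query map approach described above; the DAO shape and method names are hypothetical, not the actual CloudStack API:

```java
import java.util.List;
import java.util.Map;

// Hypothetical interface for illustration; the real DAO differs.
interface HostDetailsDao {
    Map<Long, Boolean> findDetails(String name); // host ID -> value of the named detail
}

class ClusterResignCheck {
    boolean clusterSupportsResign(HostDetailsDao detailsDao, List<Long> hostIdsInCluster) {
        // One DB round trip instead of one query per host in the loop.
        Map<Long, Boolean> supportsResign = detailsDao.findDetails("supportsResign");

        for (Long hostId : hostIdsInCluster) {
            // In-memory lookup per iteration; a missing entry counts as "not supported".
            if (!Boolean.TRUE.equals(supportsResign.get(hostId))) {
                return false;
            }
        }
        return true;
    }
}
```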
Actually, I think we just need to join the host and the host_details tables (because cluster ID should be in the host table).
We'd want all of the hosts with a particular cluster ID selected from the host table and then we'd want to join on host ID in the host_details table and further refine the returned hosts by those that have a "supportsResign" value in the name field.
Do you happen to know of any code of ours that allows you to SQL filter the returned values based on values that exist in not just one, but two tables? From what I've seen, we seem to be pretty heavily filtering only on data present in one table (even when we join with another table).
If this were just raw SQL, it would be pretty easy, but we've got that Java DB layer that this needs to fit within.
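For reference, the raw SQL version of that join would indeed be short (a sketch; table and column names are assumed from the discussion):

```sql
-- Hosts in the given cluster that advertise resignature support.
SELECT h.id
FROM host h
JOIN host_details hd ON hd.host_id = h.id
WHERE h.cluster_id = ?
  AND hd.name = 'supportsResign'
  AND hd.value = 'true';
```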
@jburwell @DaanHoogland I have a solution for quickly looking up whether a cluster supports resigning that I think we'll all be happy with. When a host connects to the management server, I check whether the host supports resigning (and update the host_details table). I added logic to this connection code to update not only the host_details table with info about resigning, but also the cluster_details table. A host connecting to the management server should not be a frequent occurrence, so I believe it's OK at this point to run through the list of hosts in the cluster of the connecting host and see if all of those hosts support resigning. If they do, then I update the cluster_details table to record that the cluster in question supports resigning. I also changed the logic to not bother adding a row to either the host_details table or the cluster_details table if the "supportsResign" property would be false (only a true value is stored now). We can then save space by treating a missing "supportsResign" property for a host or cluster as false (I was proceeding under that assumption anyway). When the time comes to ask whether the cluster supports resigning, it's a simple matter of looking for the "supportsResign" property in the cluster_details table.
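A heavily hedged sketch of the connect-time flow described above; every type and method name below is hypothetical, standing in for the real host/cluster detail DAOs:

```java
import java.util.List;
import java.util.Map;

// Hypothetical interfaces standing in for the real DAOs.
interface HostDetailsDao {
    void persistDetail(long hostId, String name, String value);
    Map<Long, Boolean> findDetails(String name);
}

interface ClusterDetailsDao {
    void persistDetail(long clusterId, String name, String value);
}

class ResignSupportUpdater {
    void onHostConnect(long hostId, long clusterId, List<Long> clusterHostIds,
                       boolean hostSupportsResign,
                       HostDetailsDao hostDetails, ClusterDetailsDao clusterDetails) {
        // Only "true" is persisted; a missing "supportsResign" row is read as false.
        if (!hostSupportsResign) {
            return;
        }
        hostDetails.persistDetail(hostId, "supportsResign", "true");

        // Host connects are infrequent, so walking the cluster's hosts here is acceptable.
        Map<Long, Boolean> perHost = hostDetails.findDetails("supportsResign");
        for (Long id : clusterHostIds) {
            if (!Boolean.TRUE.equals(perHost.get(id))) {
                return; // at least one host lacks support; leave cluster_details untouched
            }
        }
        // Every host in the cluster supports resigning; record it once at cluster scope.
        clusterDetails.persistDetail(clusterId, "supportsResign", "true");
    }
}
```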
@DaanHoogland in my experience, stack traces are a critical piece of information for operationally debugging CloudStack. Unfortunately, our logging lacks the clarity and specificity for the error message alone to suffice. I hope our logging improves over time so that stack traces can be omitted from warnings.
Then log the warning with just the message and add a debug statement with the stack trace. Logging stack traces at a level even stricter than INFO does not compute.
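A small sketch of that split, assuming a log4j-style logger (the class and failure scenario are made up):

```java
import org.apache.log4j.Logger;

public class SnapshotHelper {
    private static final Logger s_logger = Logger.getLogger(SnapshotHelper.class);

    void resignature() {
        try {
            // ... work that may fail ...
        } catch (Exception e) {
            // Message only at WARN; the full stack trace is reserved for DEBUG.
            s_logger.warn("Unable to resignature SR: " + e.getMessage());
            s_logger.debug("Resignature failure details", e);
        }
    }
}
```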
this one is ready...
Taking fast and efficient volume snapshots with XenServer (and your storage provider)

A XenServer storage repository (SR) and virtual disk image (VDI) each have UUIDs that are immutable. This poses a problem for SAN snapshots, if you intend on mounting the underlying snapshot SR alongside the source SR (duplicate UUIDs). VMware has a solution for this called re-signaturing (so, in other words, the snapshot UUIDs can be changed).

This PR only deals with the CloudStack side of things, but it works in concert with a new XenServer storage manager created by CloudOps (this storage manager enables re-signaturing of XenServer SR and VDI UUIDs).

I have written Marvin integration tests to go along with this, but cannot yet check those into the CloudStack repo as they rely on SolidFire hardware. If anyone would like to see these integration tests, please let me know.

JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9281

Here's a video I made that shows this feature in action: https://www.youtube.com/watch?v=YQ3pBeL-WaA&list=PLqOXKM0Bt13DFnQnwUx8ZtJzoyDV0Uuye&index=13

* pr/1403:
  Faster logic to see if a cluster supports resigning
  Support for backend snapshots with XenServer

Signed-off-by: Will Stevens <[email protected]>
…Pools (apache#446)

The following changes and improvements have been added:

- Allows configuring the connection pool library for database connections. By default, replaces the dbcp2 connection pool library with the more performant HikariCP. The `db.<DATABASE>.connectionPoolLib` property can be set in db.properties to use the desired library:
  > Set `dbcp` to use DBCP2
  > Set `hikaricp` to use HikariCP
- Improvements in handling of PingRoutingCommand:
  1. Added global config `vm.sync.power.state.transitioning` (default value: true) to control syncing of power states for transitioning VMs. This can be set to false to prevent computation of transitioning-state VMs.
  2. Improved VirtualMachinePowerStateSync to allow power state sync for host VMs in a batch.
  3. Optimized scanning of stalled VMs.
- Added option to set worker threads for capacity calculation using config `capacity.calculate.workers`.
- Added a caching framework based on the Caffeine in-memory caching library, https://github.com/ben-manes/caffeine
- Added caching for dynamic config keys, with expiration after write set to 30 seconds.
- Added caching for account/user role API access; expiration after write can be configured using config `dynamic.apichecker.cache.period`. If set to zero, there is no caching. Default is 0.
- Added caching for some recurring DB retrievals:
  1. CapacityManager - listing service offerings - beneficial in host capacity calculation
  2. LibvirtServerDiscoverer - existing hosts for the cluster - beneficial for host joins
  3. DownloadListener - hypervisors for zone - beneficial for host joins
  4. VirtualMachineManagerImpl - VMs in progress - beneficial for processing stalled VMs during PingRoutingCommands
- Optimized MS list retrieval for agent connect.
- Optimized finding a ready systemvm template for a zone.
- Database retrieval optimisations - fixes and refactoring for cases where only IDs or counts are used, mainly for hosts and other infra entities, plus similar cases for VMs and other host-related entities in background tasks.
- Changes in the agent-agentmanager connection with NIO client-server classes:
  1. Optimized the use of the executor service.
  2. Refactored the Agent class to better handle connections.
  3. Do SSL handshakes within worker threads.
  4. Added global configs to control the behaviour depending on the infra, since the SSL handshake and initial processing of a new agent could be a bottleneck during agent connections. `agent.max.concurrent.new.connections` controls the number of new connections the management server handles at a time; `agent.ssl.handshake.timeout` sets the number of seconds after which the SSL handshake times out at the MS end.
  5. On the agent side, backoff and SSL handshake timeout can be controlled by the agent properties `backoff.seconds` and `ssl.handshake.timeout`.
- Improvements in StatsCollection - minimize DB retrievals.
- Improvements in DeploymentPlanner - allow retrieval of only the desired host fields, and fewer retrievals.
- Improvements in host connection for a storage pool. Added config `storage.pool.host.connect.workers` to control the number of worker threads used to connect hosts to a storage pool. The worker-thread approach is currently followed only for NFS and ScaleIO pools.
- Minor improvements in resource limit calculations wrt DB retrievals.

### Schema changes

Schema changes that need to be applied if updating from 4.18.1.x: [FR73B-Phase1-sql-changes.sql.txt](https://github.com/user-attachments/files/17485581/FR73B-Phase1-sql-changes.sql.txt)

Upstream PR: apache#9840

### Changes and details from scoping phase

<details>
<summary>Changes and details from scoping phase</summary>

FR73B isn't a traditional feature FR per se, and the only way to scope it is to find classes of problems, put them in buckets, and propose a time-bound phase of developing and delivering optimisations. Instead of a specific proposal on how to fix them, we're looking for approaches and methodologies that can be applied as sprints (or short investigation/fix cycles), as well as split off well-defined problems as separate FRs. Below are some examples of the type of problem we can find around resource contention or spikes (where a resource can be CPU, RAM, or DB):

- Resource spikes on management server start/restart (such as maintenance-led restarts)
- Resource spikes on addition of hosts
- Resource spikes on deploying VMs
- Resource spikes or slowness when running list APIs

As examples, the following issues were found during the scoping exercise:

### 1. Reduce CPU and DB spikes on adding hosts or restarting mgmt server (direct agents, such as Simulator)

Introduced in apache#1403, this logic is now gated to XenServer, the only place it would run at all. The specific code is only applicable for XenServer and SolidFire (https://youtu.be/YQ3pBeL-WaA?si=ed_gT_A8lZYJiEh). Fixing this hotspot alone took away about 20-40% of CPU & DB pressure. (Screenshots: CPU and DB profiles before and after the fix.)

### 2. Reduce DB load on capacity scans

Another code pattern is fetching all DB records only to count and then discard them. Refactoring these cases can reduce CPU/DB load for environments with a really large number of hosts. The common pattern to search for in code is optimising lists of hosts/HostVOs. The DB hotspot was reduced by ~5-13% during aggressive scans.

### 3. Reduce DB load on Ping command

Upon handling Ping commands, we fetch a whole bunch of columns from the vm_instance table (joined to others), but only use the `id` column. We can optimise and reduce DB load by fetching only the `id`. Further optimisation covers how power reports are handled (for example, previously a DB query was run and then consumed via an iterator, which was optimised into a select query excluding a list of VM ids).

With 1, 2, and 3, a single management server host + simulator deployed against a single MySQL 8.x DB was found to handle up to 20k hosts across two clusters.

### 4. API and UI optimisation

For this type of issue, the metrics APIs for zone and cluster were optimised so the pages load faster. This sort of thing may be possible across the UI for resources that are very high in number.

### 5. Log optimisations

Reducing (unnecessary) logging can improve overall performance throughput (API or operational) by anywhere between 5-10%.

### 6. DB, SQL query, and mgmt server CPU load optimisations

Several optimisations were possible. As an example, `isZoneReady` was causing both DB scans/load and a CPU hotspot, and was improved. (Screenshot: profiler hotspot view.)

The following were explored:

- Using MySQL slow-query logging along with index-scan logging to find hotspots, together with jprofiler
- Adding missing indexes to speed up queries
- Reducing table scans by optimising SQL queries and using indexes
- Optimising SQL queries to remove duplicate rows (use of DISTINCT)
- Reducing CPU and DB load by using jprofiler to optimise both SQL queries and CPU hotspots

Example fix: server: reduce CPU and DB load caused by systemvm ::isZoneReady(). In this case, the SQL query was doing a large number of table scans only to determine whether the zone has any available pool+host to launch systemvms. Accordingly, the code and SQL queries, along with index optimisations, were changed to lower both DB scans and mgmt server CPU load.

Further, tools such as EXPLAIN or EXPLAIN ANALYZE, or visual explanations of queries, can help optimise queries. (Screenshots: query plans before and after adding an index to the user_ip_address table, against which user_vm_view is optimised.)

### 7. Better DB connection pooling: HikariCP

Several CPU and DB hotspots suggested that about 20+% of time was spent processing the `SELECT 1` query, which was later found to be unnecessary for JDBC 4 compliant drivers, which can use Connection::isValid to ascertain whether a connection is good. Further, heap and GC spikes were seen due to load on the mgmt server with 50k hosts. By replacing the dbcp2-based library with HikariCP, a more performant library with low production overhead, it was found that the application heap/GC load and DB CPU/query load could be reduced further. For existing environments, the validation query can be set to `/* ping */ SELECT 1`, which performs a lower-overhead application ping between mgmt server and DB.

Migration to HikariCP and related changes show a lower number of select queries and about 10-15% lower CPU load. (Screenshots: query and CPU graphs.)

Caveat: this has led to unit test failures, as many depend on dbcp2-based assumptions; these can be fixed in due time. However, the build is passing and a simulator-based test setup seems to be working.
The following telemetry of the application (mgmt server) was captured after 50k hosts joined. (Screenshots: telemetry graphs.) For 100k hosts added/joining, the connection scaling looks even better. (Screenshot: connection-scaling graph.)

### 8. Using MySQL slow logs to optimise application logic and queries

MySQL slow-query logging was enabled with the following configuration:

```
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
min_examined_row_limit = 100
```

Upon analysing the slow logs, the network_offering and user_vm_view related views, queries, and application logic were optimised as examples, to demonstrate how the methodology can be used to measure, find, and optimise bottlenecks. It was found that queries that do more table scans than the rows they return to the application (ACS mgmt server) were adding pressure on the DB.

- In the case of network_offering_view, adding an index reduced table scans.
- In the case of user_vm_view, it was found that MySQL was picking the wrong index, which caused a lot of scans because many IP addresses were present in the user_ip_address table. This turned out to be related to (or the same as) an old MySQL server bug, https://bugs.mysql.com/bug.php?id=41220, and the workaround was to force the relevant index. This sped up the listVirtualMachines API in my test env (with 50-100k hosts) from 17s to under 200ms (measured locally).

### 9. Bottlenecks identified and categorised

As part of the FR scoping effort, not everything could be fixed. As an example, some code has been marked with FIXME or TODO comments that relate to hotspots discovered during profiling. Some of it was commented out, for example to speed up host additions while reducing CPU/DB load (to allow testing of 50k-100k hosts joining). Such code can be further optimised by exploring new caching layer(s) built with the Caffeine library and Hazelcast.

Misc: if distributed multi-primary MySQL cluster support is to be explored: shapeblue/cloudstack-apple#437
Misc: list API optimisations may be worth backporting: apache#9177 apache#8782

</details>

---------

Signed-off-by: Rohit Yadav <[email protected]>
Signed-off-by: Abhishek Kumar <[email protected]>
Co-authored-by: Abhishek Kumar <[email protected]>
Co-authored-by: Fabricio Duarte <[email protected]>
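As an aside, a minimal sketch of the 30-second expire-after-write caching described in the commit message above, using the Caffeine library it names (the key and value here are illustrative):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

public class ConfigCacheExample {
    public static void main(String[] args) {
        // Mirrors the "expire 30 seconds after write" policy described for dynamic config keys.
        Cache<String, String> configCache = Caffeine.newBuilder()
                .expireAfterWrite(30, TimeUnit.SECONDS)
                .build();

        configCache.put("dynamic.apichecker.cache.period", "0");
        // Within 30 seconds this is an in-memory hit; afterwards the entry is evicted.
        System.out.println(configCache.getIfPresent("dynamic.apichecker.cache.period"));
    }
}
```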
Introduced in apache/cloudstack#1403, this logic is now gated to XenServer, the only place it would run at all. The specific code is only applicable for XenServer and SolidFire (https://youtu.be/YQ3pBeL-WaA?si=ed_gT_A8lZYJiEh). Signed-off-by: Rohit Yadav <[email protected]>