Do not retrieve VM's stats on normal VM listing #8782
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
```
##               4.19    #8782    +/- ##
=========================================
  Coverage     30.93%   30.93%
- Complexity    34263    34276    +13
=========================================
  Files          5353     5354     +1
  Lines        376055   376100    +45
  Branches      54691    54697     +6
=========================================
+ Hits         116317   116346    +29
- Misses       244443   244453    +10
- Partials      15295    15301     +6
```
Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry.
@JoaoJandre It is not good to change the default behavior if it is not a bug, IMHO.
api/src/main/java/org/apache/cloudstack/api/command/user/vm/ListVMsCmd.java
This is because the listVirtualMachinesMetrics API (and all Metrics APIs) are supersets of the non-metrics APIs. All the Metrics APIs add something related to metrics in their API response, in addition to what a non-metrics API would return. I think we shouldn't change the default behaviour of the non-metrics API - and if you really have a use-case for this, see if there's any other way to do what you're trying to accomplish, or, worst case, add a global setting that allows the behaviour you want but keep the setting's default in such a way that it continues the old behaviour for other users.
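As a rough illustration of that superset relationship (Python sketch; the field names are hypothetical, not the exact CloudStack response schema):

```python
# Sketch: a metrics API response is the non-metrics response plus extra
# metrics fields. Field names here are illustrative only.
def list_vms_response(vm):
    # Fields a plain listVirtualMachines response might carry.
    return {"id": vm["id"], "name": vm["name"], "state": vm["state"]}

def list_vms_metrics_response(vm, stats):
    # Start from the non-metrics response and layer metrics on top.
    response = list_vms_response(vm)
    response.update({"cpuused": stats["cpuused"],
                     "networkkbsread": stats["networkkbsread"]})
    return response

vm = {"id": "a1", "name": "web01", "state": "Running"}
stats = {"cpuused": "12%", "networkkbsread": 42}
base = list_vms_response(vm)
full = list_vms_metrics_response(vm, stats)
# Every non-metrics field is present, unchanged, in the metrics response.
assert all(full[k] == v for k, v in base.items())
```

This is why removing stats from the non-metrics API's default output is a behaviour change for existing callers, not just an optimisation.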
@weizhouapache, @sureshanaparti and @rohityadavcloud, we could change only the UI behavior; however, why do we have this default behavior? Not long ago we changed the default behavior of the
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
```
##               4.19    #8782       +/- ##
=============================================
- Coverage     30.93%   14.96%    -15.97%
+ Complexity    34263    10989    -23274
=============================================
  Files          5353     5373       +20
  Lines        376055   469034    +92979
  Branches      54691    57597     +2906
=============================================
- Hits         116317    70177    -46140
- Misses       244443   391087   +146644
+ Partials      15295     7770     -7525
```
Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry.
@blueorangutan package

@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 9549

@rohityadavcloud, @weizhouapache I added a configuration to control the behavior of the

@blueorangutan test rocky8 kvm-rocky8

@weizhouapache a [SL] Trillian-Jenkins test job (rocky8 mgmt + kvm-rocky8) has been kicked to run smoke tests

Thanks @JoaoJandre for the update

[SF] Trillian test result (tid-10170)
I like the idea of splitting which API is called for the metrics vs non-metrics list view (I, or others, might steal the pattern for all metrics API usage across the UI). I'm not fully satisfied with the PR yet, @JoaoJandre, and I would rather encourage you to pick some ideas from #8985 - that said, I'm also inclined to make progress in a cordial and mature manner. It would be easier to get this merged and optimise the general solution further as required (by me or others). I'm working on a much wider PR that's still in research and progress, around wider scalability issues of CloudStack (surprisingly, I'm near the root cause, and it may be possible to even get stats without much penalty). All that said - I wouldn't remember everything I write on GitHub on each and every PR, so take my review with a pinch of salt; I may change my views on things as I'm dealing with a wider scalability problem. I'll leave some comments, but I think let's go ahead.
api/src/main/java/org/apache/cloudstack/query/QueryService.java
plugins/metrics/src/main/java/org/apache/cloudstack/api/ListVMsMetricsCmd.java
api/src/main/java/org/apache/cloudstack/api/command/user/vm/ListVMsCmd.java
api/src/main/java/org/apache/cloudstack/api/response/UserVmResponse.java
plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
rohityadavcloud
left a comment
Left some remarks, otherwise LGTM
api/src/main/java/org/apache/cloudstack/api/command/user/vm/ListVMsCmd.java
@sureshanaparti ok by you like this?

@blueorangutan package

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✖️ debian ✔️ suse15. SL-JID 9631

ping @sureshanaparti

@blueorangutan package

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 9663

@blueorangutan test

@DaanHoogland a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

[SF] Trillian test result (tid-10254)
DaanHoogland
left a comment
I am ok with this given a short notice in the release notes. (cc @JoaoJandre @sureshanaparti )
* Do not retrieve VM's stats on normal VM listing
* Add config to control the behavior
* Address reviews
…Pools (apache#446)

The following changes and improvements have been added:

- Allows configuring the connection pool library for database connections. By default, this replaces the dbcp2 connection pool library with the more performant HikariCP. The `db.<DATABASE>.connectionPoolLib` property can be set in db.properties to use the desired library:
  - Set `dbcp` to use DBCP2
  - Set `hikaricp` to use HikariCP
- Improvements in the handling of PingRoutingCommand:
  1. Added global config `vm.sync.power.state.transitioning` (default value: true) to control syncing of power states for transitioning VMs. This can be set to false to prevent computation of transitioning-state VMs.
  2. Improved VirtualMachinePowerStateSync to allow power state sync for host VMs in a batch.
  3. Optimized scanning of stalled VMs.
- Added an option to set worker threads for capacity calculation using the config `capacity.calculate.workers`.
- Added a caching framework based on the Caffeine in-memory caching library, https://github.com/ben-manes/caffeine
- Added caching for dynamic config keys with expiration after write set to 30 seconds.
- Added caching for account/user role API access; expiration after write can be configured using the config `dynamic.apichecker.cache.period`. If set to zero there will be no caching. Default is 0.
- Added caching for some recurring DB retrievals:
  1. CapacityManager - listing service offerings - beneficial in host capacity calculation
  2. LibvirtServerDiscoverer - existing hosts for the cluster - beneficial for host joins
  3. DownloadListener - hypervisors for zone - beneficial for host joins
  4. VirtualMachineManagerImpl - VMs in progress - beneficial for processing stalled VMs during PingRoutingCommands
- Optimized MS list retrieval for agent connect.
- Optimized finding the ready systemvm template for a zone.
- Database retrieval optimisations - fixes and refactoring for cases where only IDs or counts are used, mainly for hosts and other infra entities.
Also similar cases for VMs and other entities related to hosts concerning background tasks.

- Changes in the agent-agentmanager connection with NIO client-server classes:
  1. Optimized the use of the executor service.
  2. Refactored the Agent class to better handle connections.
  3. Do SSL handshakes within worker threads.
  4. Added global configs to control the behaviour depending on the infra. The SSL handshake and initial processing of a new agent could be a bottleneck during agent connections. The config `agent.max.concurrent.new.connections` can be used to control the number of new connections the management server handles at a time; `agent.ssl.handshake.timeout` can be used to set the number of seconds after which the SSL handshake times out at the MS end.
  5. On the agent side, backoff and SSL handshake timeout can be controlled by the agent properties `backoff.seconds` and `ssl.handshake.timeout`.
- Improvements in StatsCollection - minimize DB retrievals.
- Improvements in DeploymentPlanner - allow retrieval of only the desired host fields, and fewer retrievals.
- Improvements in host connections for a storage pool. Added config `storage.pool.host.connect.workers` to control the number of worker threads that can be used to connect hosts to a storage pool. The worker-thread approach is currently followed only for NFS and ScaleIO pools.
- Minor improvements in resource limit calculations wrt DB retrievals.

### Schema changes

Schema changes that need to be applied if updating from 4.18.1.x: [FR73B-Phase1-sql-changes.sql.txt](https://github.com/user-attachments/files/17485581/FR73B-Phase1-sql-changes.sql.txt)

Upstream PR: apache#9840

### Changes and details from scoping phase

<details>
<summary>Changes and details from scoping phase</summary>

FR73B isn't a traditional feature FR per se, and the only way to scope it is to find classes of problems, put them in buckets, and propose a time-bound phase of developing and delivering optimisations.
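The Caffeine-based caches mentioned above use an expire-after-write policy (e.g. 30 seconds for dynamic config keys). A minimal Python analogue of that policy, with an injected clock so the behaviour is deterministic (the class and key names here are mine, not CloudStack's):

```python
import time

class ExpireAfterWriteCache:
    """Minimal analogue of Caffeine's expire-after-write policy."""
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (value, write_time)

    def put(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, written = entry
        if self.clock() - written >= self.ttl:
            del self._store[key]  # entry expired since it was written
            return None
        return value

# Fake clock so the example is deterministic.
now = [0.0]
cache = ExpireAfterWriteCache(ttl_seconds=30, clock=lambda: now[0])
cache.put("dynamic.apichecker.cache.period", "0")
assert cache.get("dynamic.apichecker.cache.period") == "0"
now[0] += 31  # advance past the TTL
assert cache.get("dynamic.apichecker.cache.period") is None
```

Expiring after write (rather than after access) bounds how stale a cached config value can get, which is why a short TTL is safe for dynamic settings.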
Instead of specific proposals on how to fix them, we're looking to find approaches and methodologies that can be applied as sprints (or short investigation/fix cycles), as well as to split out well-defined problems as separate FRs. Below are some examples of the type of problem we can find around resource contention or spikes (where a resource can be CPU, RAM, or the DB):

- Resource spikes on management server start/restart (such as maintenance-led restarts)
- Resource spikes on addition of hosts
- Resource spikes on deploying VMs
- Resource spikes or slowness on running list APIs

As examples, the following issues were found during the scoping exercise:

### 1. Reduce CPU and DB spikes on adding hosts or restarting mgmt server (direct agents, such as Simulator)

Introduced in apache#1403; this gates the logic to run only for XenServer, where it applies at all. The specific code is only applicable for XenServer and SolidFire (https://youtu.be/YQ3pBeL-WaA?si=ed_gT_A8lZYJiEh). This hotspot alone accounted for about 20-40% of CPU & DB pressure:

<img width="1002" alt="Screenshot 2024-05-03 at 3 10 13 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/f7f86c44-f865-4734-a6fd-89bd6a85ab73"> <img width="1067" alt="Screenshot 2024-05-03 at 3 11 41 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/caa5081b-8fd6-46cd-acb1-f4c5d6b5d10f">

**After the fix:** 

### 2. Reduce DB load on capacity scans

Another type of code/programming pattern wherein we fetch all DB records only to count and then discard them. Such refactoring can reduce CPU/DB load for environments with a really large number of hosts. The common pattern to search for in code is the handling of lists of hosts/hostVOs. The DB hotspot was reduced by ~5-13% during aggressive scans.

### 3. Reduce DB load on Ping command

Upon handling Ping commands, we fetch a whole bunch of columns from the vm_instance table (joined to other tables), but only use the `id` column. We can optimise and reduce DB load by fetching only the `id`.
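The two DB patterns above (count in the database instead of the application, and fetch only the needed column) can be sketched in a few lines. SQLite stands in for MySQL here, and the schema is an invented subset of `vm_instance`, purely for a self-contained demo:

```python
import sqlite3

# Illustrative only: SQLite in place of MySQL; made-up subset of vm_instance.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE vm_instance (
    id INTEGER PRIMARY KEY, name TEXT, state TEXT, host_id INTEGER)""")
conn.executemany(
    "INSERT INTO vm_instance (name, state, host_id) VALUES (?, 'Running', 1)",
    [(f"vm-{i}",) for i in range(10_000)])

# Pattern 2 (before): materialize every row in the application just to count it.
all_rows = conn.execute("SELECT * FROM vm_instance").fetchall()
count_in_app = len(all_rows)
# Pattern 2 (after): let the database count and ship back a single integer.
count_in_db = conn.execute("SELECT COUNT(*) FROM vm_instance").fetchone()[0]
assert count_in_app == count_in_db == 10_000

# Pattern 3 (before): fetch wide rows although only the id column is used.
ids_from_wide = [row[0] for row in all_rows]
# Pattern 3 (after): fetch only the column the caller actually needs.
ids_only = [row[0] for row in conn.execute("SELECT id FROM vm_instance")]
assert ids_from_wide == ids_only
```

Both rewrites return the same answers while moving far fewer bytes between the DB and the management server.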
Further, optimise how power reports are handled (for example, previously the code ran a DB query and then used an iterator, which was optimised into a select query excluding a list of VM ids). With 1, 2 and 3, a single management server host + simulator deployed against a single MySQL 8.x DB was found to handle up to 20k hosts across two clusters.

### 4. API and UI optimisation

In this type of issue, the metrics APIs for zone and cluster were optimised so that the pages would load faster. This sort of thing may be possible across the UI for resources that are very high in number.

### 5. Log optimisations

Reducing (unnecessary) logging can yield anything between a 5-10% improvement in overall performance throughput (API or operational).

### 6. DB, SQL query and mgmt server CPU load optimisations

Several optimisations were possible. As an example, `isZoneReady` was causing both DB scans/load and a CPU hotspot:

<img width="1314" alt="Screenshot 2024-05-04 at 9 19 33 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/b0749642-0819-4bb9-803a-faa9754ccefa">

The following were explored:

- Using MySQL slow-query logging along with index-scan logging to find hotspots, together with jprofiler
- Adding missing indexes to speed up queries
- Reducing table scans by optimising SQL queries and using indexes
- Optimising SQL queries to remove duplicate rows (use of distinct)
- Reducing CPU and DB load by using jprofiler to optimise both SQL queries and CPU hotspots

Example fix: server: reduce CPU and DB load caused by systemvm `::isZoneReady()`. For this case, the SQL query was doing a large number of table scans only to determine whether the zone has any available pool+host to launch systemvms. Accordingly, the code and SQL queries, along with index optimisations, were used to lower both DB scans and mgmt server CPU load.
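The add-a-missing-index approach can be demonstrated end to end in a few lines. SQLite is used here purely for a self-contained demo (the real work above targeted MySQL), and the table/index names are invented to mirror the ones discussed:

```python
import sqlite3

# SQLite stands in for MySQL; schema and names are invented for the demo.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_ip_address (id INTEGER PRIMARY KEY, vm_id INTEGER, ip TEXT)")

def plan():
    # The 'detail' column of EXPLAIN QUERY PLAN shows scan vs index search.
    row = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM user_ip_address WHERE vm_id = 7"
    ).fetchone()
    return row[3]

before = plan()          # full table scan: no index covers vm_id yet
conn.execute("CREATE INDEX i_user_ip_address__vm_id ON user_ip_address(vm_id)")
after = plan()           # the planner now searches via the new index

assert "SCAN" in before and "USING INDEX" not in before
assert "USING INDEX" in after
```

Checking the plan before and after an index change, rather than only timing the query, is the methodology the scoping work relied on.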
Further, tools such as EXPLAIN or EXPLAIN ANALYZE, or visual explain of queries, can help optimise queries; for example, before:

<img width="508" alt="Screenshot 2024-05-08 at 6 16 17 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/d85f4d19-36a2-41ee-9334-c119a4b2fc52">

After adding an index:

<img width="558" alt="Screenshot 2024-05-08 at 6 22 32 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/14ef3d13-2d25-4f41-ba25-ee68e37b5b76">

Here's a bigger view of the user_vm_view that's optimised by adding an index to the user_ip_address table: 

### 7. Better DB connection pooling: HikariCP

Several CPU and DB hotspots suggested that about 20+% of time was spent processing the `SELECT 1` validation query, which was later found to be unnecessary for JDBC 4 compliant drivers, as they can use `Connection::isValid` to ascertain whether a connection is still good. Further, heap and GC spikes were seen due to load on the mgmt server with 50k hosts. By replacing the dbcp2-based library with HikariCP, a more performant library with low production overhead, it was found that the application heap/GC load and the DB CPU/query load could be reduced further. For existing environments, the validation query can be set to `/* ping */ SELECT 1`, which performs a lower-overhead application ping between mgmt server and DB. Migration to HikariCP and the related changes show a lower number of select queries and about 10-15% lower CPU load:

<img width="1071" alt="Screenshot 2024-05-09 at 10 56 09 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/5dbf919e-4d15-48a3-ab87-5647db666132"> <img width="372" alt="Screenshot 2024-05-09 at 10 58 40 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/9cfc80c6-eb91-4036-b7f2-1e24b6c5b78a">

Caveat: this has led to unit test failures, as many tests depend on dbcp2-based assumptions; these can be fixed in due time. However, the build is passing and a simulator-based test setup seems to be working.
The following is telemetry of the application (mgmt server) after 50k hosts join:

<img width="1184" alt="Screenshot 2024-05-10 at 12 31 09 AM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/e47cd71e-2bae-4640-949c-a457c420ab70"> <img width="1188" alt="Screenshot 2024-05-10 at 12 31 26 AM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/33dec07b-834c-44b8-a9a4-1d7502973fc7">

For 100k hosts added/joining, the connection scaling looks even better:

<img width="1180" alt="Screenshot 2024-05-22 at 8 32 44 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/ee4d3c5d-4b6d-43f0-8efb-28aba64917d9">

### 8. Using MySQL slow logs to optimise application logic and queries

MySQL slow-query logging was enabled using the following configuration:

```
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
min_examined_row_limit = 100
```

Upon analysing the slow logs, the network_offering and user_vm_view related views, queries and application logic, for example, were optimised to demonstrate how the methodology can be used to measure, find and optimise bottlenecks. It was found that queries which end up doing more table scans than the rows they return to the application (the ACS mgmt server) were adding pressure on the DB.

- In the case of network_offering_view, adding an index reduced table scans.
- In the case of user_vm_view, it was found that MySQL was picking the wrong index, which caused a lot of scans as there were many IP addresses in the user_ip_address table. This turned out to be related to, or the same as, an old MySQL server bug (https://bugs.mysql.com/bug.php?id=41220) and the workaround was to force the relevant index. This sped up the listVirtualMachines API in my test env (with 50-100k hosts) from 17s to under 200ms (measured locally).

### 9.
Bottlenecks identified and categorised

As part of the FR scoping effort, not everything could possibly be fixed. As an example, some of the code has been marked with FIXME or TODO comments that relate to hotspots discovered during the profiling process. Some of it was commented out, for example to speed up host additions while reducing CPU/DB load (to allow testing of 50k-100k hosts joining). Such code can be further optimised by exploring and using new caching layer(s) that could be built using the Caffeine library and Hazelcast.

Misc: if distributed multi-primary MySQL cluster support is to be explored: shapeblue/cloudstack-apple#437

Misc: list API optimisations may be worth backporting: apache#9177 apache#8782

</details>

---------

Signed-off-by: Rohit Yadav <[email protected]>
Signed-off-by: Abhishek Kumar <[email protected]>
Co-authored-by: Abhishek Kumar <[email protected]>
Co-authored-by: Fabricio Duarte <[email protected]>
Description
For both the `listVirtualMachines` and `listVirtualMachinesMetrics` APIs, by default, ACS queries all VM details (the `details` parameter defaults to `all`) and consequently queries the `cloud.vm_stats` table for each VM, creating a summary of statistics. Therefore, every process carried out in the UI that uses the `listVirtualMachines` API also loads the VM statistics, which can cause slowdowns depending on the number of records.

A new configuration, `return.vm.stats.on.vm.list` (default: true), was added. When it is false, the `listVirtualMachines` API will not list the VM's metrics by default.
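A hedged sketch of the intended gating, in Python purely for illustration (the function and the detail names are hypothetical, not CloudStack's actual code or the exact `details` option set):

```python
# Hypothetical sketch: detail names are illustrative, not the exact
# CloudStack set. 'all' still expands to every detail, but the new setting
# can drop the stats detail (and so skip the cloud.vm_stats query) when the
# caller relied on the default.
ALL_DETAILS = {"nics", "volumes", "stats"}

def effective_details(requested, return_vm_stats_on_vm_list):
    """Expand the 'details' parameter the way the new setting would gate it."""
    details = set(requested) if requested else {"all"}
    if "all" in details:
        details = set(ALL_DETAILS)
        if not return_vm_stats_on_vm_list:
            details.discard("stats")  # default listing skips the stats lookup
    return details

# Default call, setting true (old behaviour): stats are included.
assert "stats" in effective_details(None, True)
# Default call, setting false (new option): stats are skipped.
assert "stats" not in effective_details(None, False)
# An explicit request still wins regardless of the setting.
assert "stats" in effective_details({"stats"}, False)
```

The key property is the last assertion: callers that explicitly ask for stats are unaffected, so only the implicit `all` default changes.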
How Has This Been Tested?
Called the `listVirtualMachines` API without any parameters:

- With `return.vm.stats.on.vm.list` as true (default), all the details were returned, including the metrics;
- With `return.vm.stats.on.vm.list` as false, all the details except the metrics were returned.