Conversation

@JoaoJandre
Contributor

@JoaoJandre JoaoJandre commented Mar 13, 2024

Description

For both the listVirtualMachines and listVirtualMachineMetrics APIs, by default, ACS queries all VM details (the details parameter defaults to all) and consequently queries the cloud.vm_stats table for each VM, creating a summary of statistics. Therefore, every process carried out in the UI that uses the listVirtualMachines API also loads the VM statistics, which can cause slowdowns depending on the number of records.

A new configuration, return.vm.stats.on.vm.list (default true), was added; when it is false, the listVirtualMachines API will not list the VMs' metrics by default.
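As a rough illustration of the intended behavior (hypothetical sketch; the real change lives in ListVMsCmd.java, and the detail-group names below are illustrative, not the actual API constants):

```python
# Hypothetical sketch of the gating logic; names are illustrative only.
ALL_DETAILS = {"group", "nics", "stats", "secgrp", "tmpl",
               "servoff", "diskoff", "iso", "volume", "min", "affgrp"}

def resolve_details(requested_details, return_vm_stats_on_vm_list):
    """Decide which VM detail groups listVirtualMachines should load.

    requested_details is the caller's explicit 'details' parameter,
    or None when it was omitted.
    """
    if requested_details is not None:
        return set(requested_details)   # an explicit parameter always wins
    if return_vm_stats_on_vm_list:
        return set(ALL_DETAILS)         # default true: old behavior, stats included
    return ALL_DETAILS - {"stats"}      # false: skip the cloud.vm_stats queries
```

With the setting at its default of true, callers see the old behavior; an explicit `details` parameter is unaffected either way.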

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

How Has This Been Tested?

Called listVirtualMachines API, without any parameters:

  • before this PR, all of the VM's details were returned;
  • with this PR and return.vm.stats.on.vm.list set to true (default), all of the details were returned as well;
  • with this PR and return.vm.stats.on.vm.list set to false, all of the details except the metrics were returned.

@codecov

codecov bot commented Mar 13, 2024

Codecov Report

Attention: Patch coverage is 47.82609%, with 12 lines in your changes missing coverage. Please review.

Project coverage is 30.93%. Comparing base (a7ec873) to head (c711a39).
Report is 7 commits behind head on 4.19.

Files Patch % Lines
...che/cloudstack/api/command/user/vm/ListVMsCmd.java 41.66% 6 Missing and 1 partial ⚠️
...a/org/apache/cloudstack/api/ListVMsMetricsCmd.java 33.33% 1 Missing and 1 partial ⚠️
...apache/cloudstack/api/response/UserVmResponse.java 80.00% 1 Missing ⚠️
ui/src/config/section/compute.js 0.00% 1 Missing ⚠️
ui/src/views/AutogenView.vue 0.00% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##               4.19    #8782   +/-   ##
=========================================
  Coverage     30.93%   30.93%           
- Complexity    34263    34276   +13     
=========================================
  Files          5353     5354    +1     
  Lines        376055   376100   +45     
  Branches      54691    54697    +6     
=========================================
+ Hits         116317   116346   +29     
- Misses       244443   244453   +10     
- Partials      15295    15301    +6     
Flag Coverage Δ
simulator-marvin-tests 24.76% <52.38%> (+<0.01%) ⬆️
uitests 4.39% <0.00%> (-0.01%) ⬇️
unit-tests 16.58% <0.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@weizhouapache
Member

@JoaoJandre
I suggest checking/fixing the API calls from the UI that list VMs or VM metrics.

It is not good to change the default behavior if it is not a bug, IMHO.

@rohityadavcloud
Member

For both the listVirtualMachines and listVirtualMachineMetrics APIs, by default, ACS queries all VM details (the details parameter is default all) and consequently queries the cloud.vm_stats table for each VM,

This is because the listVirtualMachinesMetrics API (and all Metrics APIs) are supersets of the non-metrics APIs. All the Metrics APIs add something metrics-related to their API response in addition to what a non-metrics API would return. I think we shouldn't change the default behaviour of the non-metrics API - and if you really have a use-case for this, see if there's any other way to do what you're trying to accomplish, or worst-case add a global setting that allows the behaviour you want but keep the setting's default in such a way that it continues the old behaviour for other users.

@JoaoJandre
Contributor Author

@JoaoJandre I suggest checking/fixing the API calls from the UI that list VMs or VM metrics.

It is not good to change the default behavior if it is not a bug, IMHO.

@weizhouapache, @sureshanaparti and @rohityadavcloud, we could change only the UI behavior; however, why do we have this default behavior?
We do not use the metrics returned by this API; when metrics are needed, users, operators, and the UI use the listVirtualMachineMetrics API. That is, we already have an API with this exact purpose.

Not long ago we changed the default behavior of the deleteTemplate API on #7731; as in this situation, the one on #7731 wasn't a bug, it was an inconvenient behavior. A user/operator who only wants to list the VMs will intuitively use the listVirtualMachines API; however, depending on the environment's size, they will run into slowness because of the default behavior. As with #7731, we could change only the UI, but changing the default behavior makes more sense.

@JoaoJandre JoaoJandre marked this pull request as ready for review May 3, 2024 15:02
@codecov-commenter

codecov-commenter commented May 3, 2024

Codecov Report

Attention: Patch coverage is 3.22581%, with 30 lines in your changes missing coverage. Please review.

Project coverage is 14.96%. Comparing base (a7ec873) to head (e0f7462).
Report is 109 commits behind head on 4.19.

Files Patch % Lines
...che/cloudstack/api/command/user/vm/ListVMsCmd.java 0.00% 17 Missing ⚠️
...apache/cloudstack/api/response/UserVmResponse.java 0.00% 8 Missing ⚠️
...a/org/apache/cloudstack/api/ListVMsMetricsCmd.java 0.00% 4 Missing ⚠️
...n/java/com/cloud/api/query/ViewResponseHelper.java 0.00% 1 Missing ⚠️
Additional details and impacted files
@@              Coverage Diff              @@
##               4.19    #8782       +/-   ##
=============================================
- Coverage     30.93%   14.96%   -15.97%     
+ Complexity    34263    10989    -23274     
=============================================
  Files          5353     5373       +20     
  Lines        376055   469034    +92979     
  Branches      54691    57597     +2906     
=============================================
- Hits         116317    70177    -46140     
- Misses       244443   391087   +146644     
+ Partials      15295     7770     -7525     
Flag Coverage Δ
simulator-marvin-tests ?
uitests 4.31% <ø> (-0.08%) ⬇️
unit-tests ?
unittests 15.66% <3.22%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@JoaoJandre
Contributor Author

@blueorangutan package

@blueorangutan

@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 9549

@JoaoJandre
Contributor Author

@rohityadavcloud , @weizhouapache I added a configuration to control the behavior of the listVirtualMachines API and updated the description. Now the default behavior is maintained. I hope we can advance the discussion on #8970 so we can create a proper mechanism to change the default behavior in the future.

@weizhouapache
Member

@blueorangutan test rocky8 kvm-rocky8

@blueorangutan

@weizhouapache a [SL] Trillian-Jenkins test job (rocky8 mgmt + kvm-rocky8) has been kicked to run smoke tests

@weizhouapache
Member

@rohityadavcloud , @weizhouapache I added a configuration to control the behavior of the listVirtualMachines API and updated the description. Now the default behavior is maintained. I hope we can advance the discussion on #8970 so we can create a proper mechanism to change the default behavior in the future.

thanks @JoaoJandre for the update

@blueorangutan

[SF] Trillian test result (tid-10170)
Environment: kvm-rocky8 (x2), Advanced Networking with Mgmt server r8
Total time taken: 49325 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr8782-t10170-kvm-rocky8.zip
Smoke tests completed. 129 look OK, 2 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_01_events_resource Error 418.29 test_events_resource.py
test_01_restore_vm Error 0.24 test_restore_vm.py
test_02_restore_vm_allocated_root Error 0.18 test_restore_vm.py
ContextSuite context=TestRestoreVM>:teardown Error 1.26 test_restore_vm.py

@rohityadavcloud
Member

I like the idea of splitting which API is called for the metrics vs non-metrics list view (I or others might steal the pattern for all metrics API usage across the UI). I'm not fully satisfied with the PR yet, @JoaoJandre, and I would rather encourage you to pick some ideas from #8985 - that said, I'm also inclined to make progress in a cordial and mature manner. It would be easier to get this merged and optimise the general solution further as required (by me or others).

I'm working on a much wider PR that's in research and progress, around wider scalability issues of CloudStack (surprisingly I'm near the root cause, and it may be possible to even get stats without much penalty). All that said - I won't remember everything I write on GitHub on each and every PR; take my review with a pinch of salt, and I may change my views on things as I'm dealing with a wider scalability problem. I'll leave some comments, but I think let's go ahead.

Member

@rohityadavcloud rohityadavcloud left a comment


Left some remarks, otherwise LGTM

@rohityadavcloud rohityadavcloud added this to the 4.19.1.0 milestone May 8, 2024
@DaanHoogland
Contributor

@sureshanaparti ok by you like this?

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✖️ debian ✔️ suse15. SL-JID 9631

@DaanHoogland
Contributor

ping @sureshanaparti

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 9663

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-10254)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 43718 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr8782-t10254-kvm-centos7.zip
Smoke tests completed. 130 look OK, 1 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_01_events_resource Error 416.93 test_events_resource.py

Contributor

@DaanHoogland DaanHoogland left a comment


I am ok with this given a short notice in the release notes. (cc @JoaoJandre @sureshanaparti )

@rohityadavcloud rohityadavcloud merged commit 631d6ad into apache:4.19 Jun 5, 2024
dhslove pushed a commit to ablecloud-team/ablestack-cloud that referenced this pull request Jun 17, 2024
* Do not retrieve VM's stats on normal VM listing

* Add config to control the behavior

* address reviews
weizhouapache pushed a commit to shapeblue/cloudstack that referenced this pull request Mar 4, 2025
…Pools (apache#446)

The following changes and improvements have been added:
- Allows configuring the connection pool library for database connections. By default, replaces the dbcp2 connection pool library with the more performant HikariCP.
The db.<DATABASE>.connectionPoolLib property can be set in db.properties to use the desired library.

> Set dbcp for using DBCP2
> Set hikaricp for using HikariCP
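A db.properties fragment might look like the following (the property name follows the db.<DATABASE>.connectionPoolLib pattern above; `cloud` is the usual database name, so verify against your own db.properties):

```properties
# db.properties - choose the connection pool library for the 'cloud' database
db.cloud.connectionPoolLib=hikaricp
# or, to keep the previous pool:
# db.cloud.connectionPoolLib=dbcp
```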

- Improvements in handling of PingRoutingCommand
   
    1. Added global config - `vm.sync.power.state.transitioning`, default value: true, to control syncing of power states for transitioning VMs. This can be set to false to prevent computation of transitioning state VMs.
    2. Improved VirtualMachinePowerStateSync to allow power state sync for host VMs in a batch
    3. Optimized scanning stalled VMs

- Added option to set worker threads for capacity calculation using config - `capacity.calculate.workers`

- Added caching framework based on Caffeine in-memory caching library, https://github.com/ben-manes/caffeine

- Added caching for dynamic config keys with expiration after write set to 30 seconds.

- Added caching for account/user role API access; expiration after write can be configured using the config - `dynamic.apichecker.cache.period`. If set to zero, there will be no caching. Default is 0.

- Added caching for some recurring DB retrievals
   
    1. CapacityManager - listing service offerings - beneficial in host capacity calculation
    2. LibvirtServerDiscoverer - existing host for the cluster - beneficial for host joins
    3. DownloadListener - hypervisors for zone - beneficial for host joins
    4. VirtualMachineManagerImpl - VMs in progress - beneficial for processing stalled VMs during PingRoutingCommands
    
- Optimized MS list retrieval for agent connect 

- Optimized finding the ready systemvm template for a zone

- Database retrieval optimisations - fix and refactor for cases where only IDs or counts are used, mainly for hosts and other infra entities. Also similar cases for VMs and other host-related entities concerning background tasks

- Changes in agent-AgentManager connection with NIO client-server classes

    1. Optimized the use of the executor service
    2. Refactored the Agent class to better handle connections
    3. Do SSL handshakes within worker threads
    4. Added global configs to control the behaviour depending on the infra. The SSL handshake and initial processing of a new agent could be a bottleneck during agent connections. The config `agent.max.concurrent.new.connections` can be used to control the number of new connections the management server handles at a time; `agent.ssl.handshake.timeout` can be used to set the number of seconds after which the SSL handshake times out at the MS end.
    5. On the agent side, backoff and SSL handshake timeout can be controlled via the agent properties `backoff.seconds` and `ssl.handshake.timeout`.

- Improvements in StatsCollection - minimize DB retrievals.

- Improvements in DeploymentPlanner to allow retrieval of only the desired host fields, and fewer retrievals overall.

- Improvements in hosts connection for a storage pool. Added config - `storage.pool.host.connect.workers` to control the number of worker threads that can be used to connect hosts to a storage pool. Worker thread approach is followed currently only for NFS and ScaleIO pools.

- Minor improvements in resource limit calculations with regard to DB retrievals
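The "expire after write" caching described above (dynamic config keys, role API access) can be sketched as follows. This is an illustrative Python stand-in for the pattern only; the actual code uses the Caffeine library on the Java side:

```python
import time

class ExpireAfterWriteCache:
    """Tiny sketch of the 'expire after write' caching pattern."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._store = {}            # key -> (value, written_at)

    def get(self, key, loader):
        entry = self._store.get(key)
        now = self.clock()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]         # fresh hit: no DB round-trip
        value = loader(key)         # miss or expired: reload and restamp
        self._store[key] = (value, now)
        return value
```

A TTL of 0 would make every `get` reload, matching the "if set to zero then there will be no caching" semantics described for `dynamic.apichecker.cache.period`.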

### Schema changes
Schema changes that need to be applied if updating from 4.18.1.x
[FR73B-Phase1-sql-changes.sql.txt](https://github.com/user-attachments/files/17485581/FR73B-Phase1-sql-changes.sql.txt)


Upstream PR: apache#9840

### Changes and details from scoping phase
<details>
<summary>Changes and details from scoping phase</summary>


FR73B isn't a traditional feature FR per se, and the only way to scope it is to find classes of problems, put them in buckets, and propose a time-bound phase of developing and delivering optimisations. Instead of a specific proposal on how to fix them, we're looking for approaches and methodologies that can be applied as sprints (or short investigation/fix cycles), as well as to split out well-defined problems as separate FRs.

Below are some examples of the type of problem we can find around resource contention or spikes (where the resource can be CPU, RAM, or DB):

- Resources spikes on management server start/restart (such as maintenance led restarts)
- Resource spikes on addition of Hosts
- Resource spikes on deploying VMs
- Resource spikes or slowness on running list APIs

As examples, the following issues were found during the scoping exercise:

### 1. Reduce CPU and DB spikes on adding hosts or restarting mgmt server (direct agents, such as Simulator)

The issue was introduced in apache#1403; the fix gates the logic to XenServer only, the one place where it would run at all. The specific code is only applicable to XenServer and SolidFire (https://youtu.be/YQ3pBeL-WaA?si=ed_gT_A8lZYJiEh).

Fixing this hotspot alone took away about 20-40% of CPU & DB pressure:

<img width="1002" alt="Screenshot 2024-05-03 at 3 10 13 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/f7f86c44-f865-4734-a6fd-89bd6a85ab73">

<img width="1067" alt="Screenshot 2024-05-03 at 3 11 41 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/caa5081b-8fd6-46cd-acb1-f4c5d6b5d10f">

**After the fix:**

![Screenshot 2024-05-03 at 5 31 05 PM](https://github.com/shapeblue/cloudstack-apple/assets/95203/2ba0b1c9-9922-44a9-ae4f-fb65f77866d4)

### 2. Reduce DB load on capacity scans

Another type of code/programming pattern wherein we fetch all DB records only to count and discard them. Such refactoring can reduce CPU/DB load for environments with a really large number of hosts. The common pattern to search for in code is lists of hosts/HostVOs that can be optimised. The DB hot-spot was reduced by ~5-13% during aggressive scans.
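The "count, don't fetch" refactor can be sketched like this (SQLite stand-in with an illustrative schema; the real code touches host/HostVO DAOs):

```python
import sqlite3

# When only the number of matching hosts matters, push the count into
# SQL instead of materialising every row in the application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE host (id INTEGER PRIMARY KEY, cluster_id INTEGER)")
conn.executemany("INSERT INTO host (cluster_id) VALUES (?)",
                 [(1,)] * 500 + [(2,)] * 250)

# Before: fetch every row, count in the application.
rows = conn.execute("SELECT * FROM host WHERE cluster_id = 1").fetchall()
count_slow = len(rows)          # 500 full rows shipped to the app

# After: let the database count.
count_fast = conn.execute(
    "SELECT COUNT(*) FROM host WHERE cluster_id = 1").fetchone()[0]

assert count_slow == count_fast == 500
```

The result is identical, but the second form ships a single integer over the wire instead of every column of every row.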

### 3. Reduce DB load on Ping command

Upon handling Ping commands, we fetch a whole bunch of columns from the vm_instance table (joined to other tables), but only use the `id` column. We can optimise and reduce DB load by fetching only the `id`. Further optimised how power reports are handled (for example, previously the code ran a DB query and then used an iterator, which was optimised into a select query excluding a list of VM ids).
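The id-only retrieval can be sketched similarly (SQLite stand-in with an illustrative vm_instance shape, not the real CloudStack schema):

```python
import sqlite3

# Fetch only the `id` column when that is all the caller uses, rather
# than the whole joined row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vm_instance "
             "(id INTEGER PRIMARY KEY, name TEXT, state TEXT, host_id INTEGER)")
conn.executemany(
    "INSERT INTO vm_instance (name, state, host_id) VALUES (?, ?, ?)",
    [(f"vm-{i}", "Running", 1) for i in range(100)])

# Before: SELECT * and then use only the first column.
wide = conn.execute("SELECT * FROM vm_instance WHERE host_id = 1").fetchall()
ids_slow = [row[0] for row in wide]

# After: ask the database for just the ids.
ids_fast = [r[0] for r in conn.execute(
    "SELECT id FROM vm_instance WHERE host_id = 1")]

assert ids_slow == ids_fast
```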

With 1, 2 and 3, a single management server host + simulator deployed against a single MySQL 8.x DB was found to handle up to 20k hosts across two clusters.

### 4. API and UI optimisation

In this class of issues, the metrics APIs for zone and cluster were optimised so the pages load faster. The same may be possible across the UI for resources that are very high in number.

### 5. Log optimisations

Reducing (unnecessary) logging can yield anywhere between a 5-10% improvement in overall performance throughput (API or operational).

### 6. DB, SQL Query and Mgmt server CPU load Optimisations

Several optimisations were possible. As an example, `isZoneReady` was causing both DB scans/load and a CPU hotspot, and was improved:

<img width="1314" alt="Screenshot 2024-05-04 at 9 19 33 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/b0749642-0819-4bb9-803a-faa9754ccefa">

The following were explored:

- Using mysql slow-query logging along with index scan logging to find hotspot, along with jprofiler
- Adding missing indexes to speed up queries
- Reduce table scans by optimising sql query and using indexes
- Optimising sql queries to remove duplicate rows (use of distinct)
- Reduce CPU and DB load by using jprofiler to optimise both sql query
  and CPU hotspots

Example fix:

server: reduce CPU and DB load caused by systemvm ::isZoneReady()
For this case, the sql query was causing a large number of table scans
only to determine if the zone has any available pool+host to launch
systemvms. Accordingly, the code and sql queries, along with index
optimisations, were used to lower both DB scans and mgmt server CPU load.

Further, tools such as EXPLAIN or EXPLAIN ANALYZE, or visual explain of queries, can help optimise queries; for example, before:

<img width="508" alt="Screenshot 2024-05-08 at 6 16 17 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/d85f4d19-36a2-41ee-9334-c119a4b2fc52">

After adding an index:

<img width="558" alt="Screenshot 2024-05-08 at 6 22 32 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/14ef3d13-2d25-4f41-ba25-ee68e37b5b76">

Here's a bigger view of the user_vm_view that's optimised by adding an index to the user_ip_address table:

![zzexplain](https://github.com/shapeblue/cloudstack-apple/assets/95203/72e44291-a657-49da-adcd-5803a2fa91f9)
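The before/after effect of adding an index can be reproduced in miniature (using SQLite's EXPLAIN QUERY PLAN in place of MySQL's EXPLAIN; the table and index names are illustrative, not the ones from the actual fix):

```python
import sqlite3

# Adding an index turns a full table scan into an index search.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_ip_address "
             "(id INTEGER PRIMARY KEY, vm_id INTEGER, ip TEXT)")
conn.executemany("INSERT INTO user_ip_address (vm_id, ip) VALUES (?, ?)",
                 [(i % 10, f"10.0.0.{i}") for i in range(100)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the plan detail.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT ip FROM user_ip_address WHERE vm_id = 4"
before = plan(query)
conn.execute("CREATE INDEX i_user_ip_address__vm_id ON user_ip_address(vm_id)")
after = plan(query)

assert "SCAN" in before          # full table scan without the index
assert "USING INDEX" in after    # index search with it
```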

### 7. Better DB Connection Pooling: HikariCP

Several CPU and DB hotspots suggested that about 20+% of time was spent processing the `SELECT 1` query, which was later found to be unnecessary for JDBC 4 compliant drivers, which use Connection::isValid to ascertain whether a connection is good. Further, heap and GC spikes are seen due to load on the mgmt server with 50k hosts. By replacing the dbcp2 based library with HikariCP, a more performant library with low production overhead, it was found that the application heap/GC load and DB CPU/query load could be reduced further. For existing environments, the validation query can be set to `/* ping */ SELECT 1`, which performs a lower-overhead application ping between mgmt server and DB.

Migration to HikariCP and related changes shows a lower select query load, and about 10-15% lower CPU load:

<img width="1071" alt="Screenshot 2024-05-09 at 10 56 09 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/5dbf919e-4d15-48a3-ab87-5647db666132">
<img width="372" alt="Screenshot 2024-05-09 at 10 58 40 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/9cfc80c6-eb91-4036-b7f2-1e24b6c5b78a">

Caveat: this has led to unit test failures, as many tests depended on dbcp2-based assumptions; these can be fixed in due time. However, the build is passing and a simulator-based test setup seems to be working. The following is telemetry of the application (mgmt server) after 50k hosts join:

<img width="1184" alt="Screenshot 2024-05-10 at 12 31 09 AM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/e47cd71e-2bae-4640-949c-a457c420ab70">
<img width="1188" alt="Screenshot 2024-05-10 at 12 31 26 AM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/33dec07b-834c-44b8-a9a4-1d7502973fc7">

For 100k hosts added/joining, the connection scaling seems even better:

<img width="1180" alt="Screenshot 2024-05-22 at 8 32 44 PM" src="https://github.com/shapeblue/cloudstack-apple/assets/95203/ee4d3c5d-4b6d-43f0-8efb-28aba64917d9">

### 8. Using MySQL slow logs to optimise application logic and queries

MySQL slow query logging was enabled using the following configuration:

```
slow_query_log		= 1
slow_query_log_file	= /var/log/mysql/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
min_examined_row_limit = 100
```

Upon analysing the slow logs, the network_offering and user_vm_view related views, queries, and application logic were optimised to demonstrate how the methodology can be used to measure, find, and optimise bottlenecks. It was found that queries that end up doing more table scans than the rows they return to the application (ACS mgmt server) were adding pressure on the DB.

- In the case of network_offering_view, adding an index reduced table scans.
- In the case of user_vm_view, it was found that MySQL was picking the wrong index, which caused a lot of scans because there were many IP addresses in the user_ip_address table. It turned out to be related to (or the same as) an old MySQL server bug, https://bugs.mysql.com/bug.php?id=41220, and the workaround fix was to force the relevant index. This sped up the listVirtualMachines API in my test env (with 50-100k hosts) from 17s to under 200ms (measured locally).
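The workaround pattern is roughly the following (illustrative MySQL syntax with a hypothetical index name; the actual fix applies the hint inside the view's underlying query):

```sql
-- Illustrative only: force the optimiser onto the intended index when it
-- mis-plans the join (cf. MySQL bug #41220); the index name is hypothetical.
SELECT u.id
FROM user_ip_address u FORCE INDEX (fk_user_ip_address__vm_id)
WHERE u.vm_id = 42;
```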

### 9. Bottlenecks identified and categorised

As part of the FR scoping effort, not everything could possibly be fixed; as an example, some of the code has been marked with FIXME or TODO relating to hotspots discovered during profiling. Some of it was commented out, for example to speed up host additions while reducing CPU/DB load (to allow testing of 50k-100k hosts joining).

Such code can be further optimised by exploring and using new caching layer(s) that could be built using the Caffeine library and Hazelcast.

Misc: if distributed multi-primary MySQL cluster support is to be explored:
shapeblue/cloudstack-apple#437

Misc: list API optimisations may be worth back porting:
apache#9177
apache#8782

</details>

---------

Signed-off-by: Rohit Yadav <[email protected]>
Signed-off-by: Abhishek Kumar <[email protected]>
Co-authored-by: Abhishek Kumar <[email protected]>
Co-authored-by: Fabricio Duarte <[email protected]>