Use Storage for Tracking Transactional Layers #10744
shawntabrizi wants to merge 15 commits into master from
Conversation
Probably we should rename this from "transactional", which is just super confusing. Maybe something more explicit like
I still don't get why we want this. Shouldn't the transactional overhead incurred be part of the worst-case benchmark? Meaning: isn't the benchmark for a dispatchable already choosing the path that makes use of the most storage layers? So why do we need to track and cap it?
The number of storage layers is not deterministic at the moment for a particular extrinsic, for example batch_all. This limit can be used as a hint for benchmarking to know how many layers to emulate when calculating weight.
So this is to guard against a case where a storage-heavy dispatchable is called with a lot of layers already created?
Yes, basically to know ahead of time the worst-case number of layers that we could encounter, and potentially take care of this in the weight calculation. Otherwise we would have no idea, and we would need to assume the worst-case scenario, which would be effectively unbounded. Also, by default, we will make the number of layers 1.
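To make the batch_all point concrete, here is a self-contained toy model (plain Rust, not FRAME code; all names are illustrative) of why the layer count cannot be known statically: each batch-style call opens one more layer around its contents, so the depth depends on how the caller nested the batches.

    // Toy model only: `Layers`, `Call`, and `dispatch` are illustrative names,
    // not part of this PR or of frame_support.
    struct Layers(u32);

    impl Layers {
        fn with_layer<R>(&mut self, f: impl FnOnce(&mut Self) -> R) -> R {
            self.0 += 1; // open a nested storage layer
            let out = f(self);
            self.0 -= 1; // close it again
            out
        }
    }

    enum Call {
        Transfer,
        BatchAll(Vec<Call>),
    }

    fn dispatch(call: &Call, layers: &mut Layers, max_depth: &mut u32) {
        match call {
            Call::Transfer => *max_depth = (*max_depth).max(layers.0),
            // Each batch wraps its contents in one more layer, so the total
            // depth is decided by the caller, not by the dispatchable itself.
            Call::BatchAll(calls) => layers.with_layer(|l| {
                for c in calls {
                    dispatch(c, l, max_depth);
                }
            }),
        }
    }

    fn main() {
        let call = Call::BatchAll(vec![Call::BatchAll(vec![Call::Transfer])]);
        let (mut layers, mut depth) = (Layers(0), 0);
        dispatch(&call, &mut layers, &mut depth);
        assert_eq!(depth, 2); // two nested batches -> two layers at the deepest point
    }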
Maybe you can walk me through how this is supposed to work? Let's assume you are
This reverts commit 5ae29f7.
ggwpez left a comment
I know it's WIP code, but I think the comments still apply; I was just curious.
    #(#attrs)*
    #vis #sig {
        use #crate_::storage::{execute_with_storage_layer, has_storage_layer};
        if has_storage_layer() {
So this prevents recursive storage layers?
Was that not the whole point of having a limit?
This would be an entirely different approach to storage layers, where rather than spawning new transactional layers per thing, we would just spawn at most one.
This would be the thing we would probably use by default across all pallet extrinsics.
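A rough sketch of what that "at most one layer" expansion could look like, based on the macro fragment above; the helper bodies here are made-up stand-ins so the snippet compiles on its own, and only the names execute_with_storage_layer and has_storage_layer (plus the limit argument seen later in the tests) come from this PR.

    // Stand-ins for the frame_support helpers referenced by the macro above;
    // the real ones read and bump the layer counter in storage.
    fn has_storage_layer() -> bool { false }
    fn execute_with_storage_layer<R>(_limit: u8, f: impl FnOnce() -> R) -> R { f() }

    // Roughly what a wrapped dispatchable body would expand to:
    fn expanded_call() -> Result<(), &'static str> {
        let body = || -> Result<(), &'static str> {
            // ... original function body ...
            Ok(())
        };

        if has_storage_layer() {
            // Already inside a layer: run the body directly rather than
            // nesting another transactional layer.
            body()
        } else {
            // Top-level entry: wrap the body in exactly one storage layer.
            execute_with_storage_layer(1, body)
        }
    }

    fn main() {
        assert!(expanded_call().is_ok());
    }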
    thread_local! {
        static TRANSACTION_LEVEL: RefCell<u32> = RefCell::new(0);
    pub fn get_storage_layer() -> Layer {
        crate::storage::unhashed::get_or_default::<Layer>(STORAGE_LAYER_KEY)
I'm not sure, but there could be an optimization where we always return the default for the top call in the call stack; since the first call always has the default value, we could get rid of the storage access there.
How do we know we are in a top call? In any case, this is not a "real" storage read, since it always happens in memory.
        return Err(())
    }
    // Cannot overflow because of check above.
    set_storage_layer(&(existing_levels + 1));
If the get/set ends up wasting resources, we could add an Increment(key, limit) function to ParityDB. SQL servers have this to reduce the number of queries.
This never gets written to the DB, and thus is always in memory. I will let you see if you can rationalize why.
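One way to rationalize it, sketched as a toy model (the Option<u8> stands in for the value under the special layer key; none of this is the PR's literal code): every layer increments the counter on entry and decrements it on exit, and the key is deleted outright once the count returns to zero, so by the end of the extrinsic the key is gone from the overlay again and there is nothing left to persist.

    // Toy model: `level` plays the role of the value stored under the special
    // layer key; `None` means the key does not exist at all.
    fn inc_storage_layer(level: &mut Option<u8>) {
        *level = Some(level.unwrap_or(0).saturating_add(1));
    }

    fn dec_storage_layer(level: &mut Option<u8>) {
        match level.unwrap_or(0) {
            // Back at the top level: delete the key instead of storing a 0,
            // so no trace of it is left once all layers have closed.
            0 | 1 => *level = None,
            n => *level = Some(n - 1),
        }
    }

    fn main() {
        let mut key = None;
        inc_storage_layer(&mut key); // enter a layer
        inc_storage_layer(&mut key); // enter a nested layer
        dec_storage_layer(&mut key);
        dec_storage_layer(&mut key);
        assert_eq!(key, None); // nothing left to write when the block ends
    }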
    fn dec_storage_layer() {
        let existing_levels = get_storage_layer();
        if existing_levels == 0 {
            log::warn!(
Debug assert, maybe? When we extend the defensive_trait from @kianenigma, we could have a proper solution here.
    thread_local! {
        static TRANSACTION_LEVEL: RefCell<u32> = RefCell::new(0);
    pub fn get_storage_layer() -> Layer {
        crate::storage::unhashed::get_or_default::<Layer>(STORAGE_LAYER_KEY)
You can debug assert that: if a value is read, it must be != 0.
    }

    pub fn has_storage_layer() -> bool {
        get_storage_layer() > 0
This reduces to has_key(STORAGE_LAYER_KEY) as the storage layer value in the DB should never be 0.
has_key and get have the same complexity, except that get also needs to decode the storage into the right type. In this case, decoding a u8 is trivial.
        require_transaction();
        TransactionOutcome::Rollback(())
    });
    assert_ok!(execute_with_storage_layer(u8::MAX, || -> DispatchResult {
You can use assert_storage_noop! here and probably in others (e.g. require_transaction_should_return_error) as well.
I am not asserting a storage noop here, though. I could modify storage in this call and it would still be valid. The test here is that it should return Ok, and that within the function, the storage layer should exist.
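For reference, here is roughly the shape of the test being discussed, reconstructed from the diff above; the import paths are an assumption (these helpers belong to this PR, not to a released crate), and the body is a sketch of the intent rather than the verbatim test.

    // Paths assumed from the `use #crate_::storage::{...}` line in the macro above;
    // these helpers are part of this PR, not of released frame_support.
    use frame_support::{assert_ok, storage::{execute_with_storage_layer, has_storage_layer}};
    use sp_runtime::DispatchResult;

    #[test]
    fn execute_with_storage_layer_works() {
        sp_io::TestExternalities::default().execute_with(|| {
            assert_ok!(execute_with_storage_layer(u8::MAX, || -> DispatchResult {
                // A storage layer must exist inside the closure...
                assert!(has_storage_layer());
                // ...and writing to storage here would still be fine; the test
                // is not a storage-noop assertion.
                Ok(())
            }));
        });
    }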
Hey, is anyone still working on this? Due to the inactivity this issue has been automatically marked as stale. It will be closed if no further activity occurs. Thank you for your contributions.
    use syn::{parse::Parse, ItemFn, LitInt, Result};

    struct StorageLayerLimit {
        limit: u8,
Why is this not a tuple struct?
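For comparison, a tuple-struct version of that parser, sketched under the assumption that the attribute argument is a single integer literal (which is what the LitInt import above suggests); this is illustrative, not the PR's code.

    use syn::{
        parse::{Parse, ParseStream},
        LitInt, Result,
    };

    // The named field carries no extra information, so a tuple struct is enough.
    struct StorageLayerLimit(u8);

    impl Parse for StorageLayerLimit {
        fn parse(input: ParseStream) -> Result<Self> {
            // Parse a single integer literal, e.g. the `5` in an attribute
            // written like `#[transactional(5)]`.
            let lit: LitInt = input.parse()?;
            Ok(Self(lit.base10_parse()?))
        }
    }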
kianenigma left a comment
Is there any written note on what should change about this PR?
I updated the comment to include the specification issue here: #10806. Sorry about that.
    log::warn!(
        "We are underflowing with calculating transactional levels. Not great, but let's not panic...",
    );
Suggested change (replacing the log::warn! above):

    frame_support::defensive!(
        "We are underflowing with calculating transactional levels. Not great, but let's not panic...",
    );
    (ConsumerRemaining, ConsumerRemaining) |
    (NoProviders, NoProviders) => true,
    (NoProviders, NoProviders) |
    (TooManyConsumers, TooManyConsumers) |
Wish there would've been compiler warnings for not covering these...
Breaking this down into parts, starting with:
In progress...
Specification: #10806