
Conversation

@linh2931 (Contributor)

This PR implements EOSVM-OC support for sync calls. It unifies apply() and do_sync_call() into a single execute(), which in turn unifies EOSVM/EOSVM-JIT and EOSVM-OC processing and makes the code simpler. Considerable refactoring was performed as a result.

In the OC implementation itself, most changes are around sync_call entry point handling.

Resolves #1043.

@linh2931 linh2931 requested review from heifner and spoonincode April 15, 2025 18:33
@linh2931 linh2931 marked this pull request as draft April 15, 2025 18:34
int64_t(*call_func)(uint64_t, uint64_t, uint32_t) = (int64_t(*)(uint64_t, uint64_t, uint32_t))(cb->running_code_base + *code.call_offset);
call_func(context.get_sender().to_uint64_t(), context.get_receiver().to_uint64_t(), static_cast<uint32_t>(static_cast<eosio::chain::sync_call_context&>(context).data.size()));
} else {
status = eosio::chain::execution_status::receiver_not_support_sync_call;
Contributor Author:
How do we check the sync_call entry point signature?

Contributor:

IMO we ought not to need to. The validity of the sync_call export doesn't need to be tracked and checked by each individual wasm runtime in a bespoke way -- we ought to be able to do that in a single common location.

Unfortunately it seems we cannot use the instantiation cache as is, since OC can bypass that.

We could use the code index but perhaps that is considered improper. (that said, the code index already tracks non-protocol aspects of code like a reference count!)

I don't have other ideas at the moment.

Contributor Author (@linh2931, Apr 17, 2025):
Created issue #1392.

@linh2931 linh2931 marked this pull request as ready for review April 15, 2025 22:51
: cc(data_dir, eosvmoc_config, db), exec(cc), mem(wasm_constraints::maximum_linear_memory/wasm_constraints::wasm_page_size) {
: cc(data_dir, eosvmoc_config, db) {
for (auto i = 0; i < 4; ++i) {
exec.emplace_back(std::make_unique<eosvmoc::executor>(cc));
Contributor:

Ideally every executor in a thread shares the same resources. I'm not sure of the best approach -- an executor could expose a clone() call that returns a new executor sharing the same resources, or maybe there needs to be more refactoring, such as a class that holds an executor's resources and is passed to a non-member execute() call (I'm having trouble coming up with names for these things offhand).

I don't think we need to make this executor improvement immediately, though.

Contributor Author:

Created an improvement issue for this: #1394.

thread_local std::unique_ptr<eosvmoc::memory> eosvmoc_runtime::mem_thread_local{};
thread_local std::vector<std::unique_ptr<eosvmoc::executor>> eosvmoc_runtime::exec_thread_local{};
thread_local std::vector<std::unique_ptr<eosvmoc::memory>> eosvmoc_runtime::mem_thread_local{};
thread_local uint32_t eosvmoc_runtime::ro_thread_index = 0;
Contributor:

I think we need the final stack approach before considering this ready for review.

Contributor Author:

I would like to make the wasm allocator pool a templated class and use it here. Will retarget this PR on #1346 to get it going.

Contributor Author:

Done

BOOST_REQUIRE_NO_THROW(t.push_action("caller"_n, "callwithinpt"_n, "caller"_n, mvo()("input", 3)));
return;
}

Contributor:

Need to get the memory situation handled before this is ready for full review.

Currently the main thread memory is created with full 528 slices, and the RO thread memories are created with 10 slices.

For the simplest approach to get this off the ground, I'd suggest just making every sync call use a memory with a single slice.

Ideally we'd do something more complex where shallow sync calls get more slices than deeper sync calls (under the premise that shallow calls are more frequent). But we'd need to work out the math on this first.

Contributor Author:

Done with a single slice for now. Created an issue #1393.

@linh2931 linh2931 marked this pull request as draft April 16, 2025 12:18
@linh2931 linh2931 changed the base branch from sync_call to wasm_allocs_pool April 16, 2025 12:33
Base automatically changed from wasm_allocs_pool to sync_call April 17, 2025 17:08
size_t code_begin;
eosvmoc_optional_offset_or_import_t start;
unsigned apply_offset;
std::optional<unsigned> call_offset;
Contributor:

I think this might cause a problem for existing code caches, which may or may not be properly handled on startup. Changing the code_descriptor layout probably requires indicating somewhere higher up (such as in the header) that the format has changed and that the cache must be destroyed and recreated at startup.

Contributor Author:

Would the new version of current_codegen_version (line 48 below) be sufficient?

static constexpr uint8_t current_codegen_version = 3;

if(cd.codegen_version != current_codegen_version) {
allocator->deallocate(code_mapping + cd.code_begin);
allocator->deallocate(code_mapping + cd.initdata_begin);
continue;
}

Contributor:

No, I don't believe that will help. The serialized format of code_descriptor has changed, and it isn't versioned. I think you need to change

static constexpr uint64_t header_id = 0x32434f4d56534f45ULL; //"EOSVMOC2" little endian

so that the cache file is completely removed and recreated.

Contributor Author:

Thanks! I missed that.

…ting code cache will be destroyed and recreated at startup
static constexpr size_t header_size = 512u;
static constexpr size_t total_header_size = header_offset + header_size;
static constexpr uint64_t header_id = 0x32434f4d56534f45ULL; //"EOSVMOC2" little endian
static constexpr uint64_t header_id = 0x33434f4d56534f45ULL; //"EOSVMOC2" little endian
Contributor:

Change the comment too.

Contributor Author:

Sorry about that.

@linh2931 linh2931 merged commit 85d7b51 into sync_call May 7, 2025
28 checks passed
@linh2931 linh2931 deleted the sync_call_on_oc branch May 7, 2025 21:56