[BLOCKED/WIP] performance: Add batch InjectApplicationMessages to support high throughput#2222
RicoSuter wants to merge 1 commit into dotnet:master
Conversation
@dotnet-policy-service agree
```csharp
public Task<MqttPacket> WaitAsync()
{
    // Lazy initialization - only allocate when actually needed
    _promise ??= new AsyncTaskCompletionSource<MqttPacket>();
```
Do we maybe need to lock here? This method is exposed externally, so it might happen that the promise gets allocated twice. A Lazy<T> would probably help here, but I don't know whether the Lazy introduces new overhead that cancels out the initial optimization.
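One lock-free middle ground between a plain `??=` and a `Lazy<T>` is publishing the promise with `Interlocked.CompareExchange`. The sketch below is only an illustration of that pattern, not the PR's code: `PacketAwaiter` is a hypothetical name, and `TaskCompletionSource<T>` stands in for the PR's `AsyncTaskCompletionSource<T>` so the snippet is self-contained.

```csharp
using System.Threading;
using System.Threading.Tasks;

class PacketAwaiter<TPacket>
{
    private TaskCompletionSource<TPacket>? _promise;

    public Task<TPacket> WaitAsync()
    {
        // Fast path: promise already exists, no allocation and no lock.
        var promise = Volatile.Read(ref _promise);
        if (promise == null)
        {
            var fresh = new TaskCompletionSource<TPacket>(
                TaskCreationOptions.RunContinuationsAsynchronously);

            // Publish atomically. If another thread won the race, use its
            // instance; our freshly allocated one is simply collected.
            promise = Interlocked.CompareExchange(ref _promise, fresh, null) ?? fresh;
        }

        return promise.Task;
    }
}
```

Compared to `Lazy<T>`, this keeps the allocation-free fast path once initialized and, under contention, at worst wastes one throwaway `TaskCompletionSource` rather than taking a lock on every call.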
I had a look at your changes. The idea behind this sounds reasonable, but I still need to investigate a little more because several different optimizations are made. For example, reusing the QoS 0 packet makes sense to me, but I wonder whether the injection of multiple application messages is an important strategy to apply.
@chkr1011 thanks for having a look. This PR is just part of a bigger investigation I'm doing into high throughput. I'm still working on a PR, but I think it's too big to be accepted here; you can have a look at RicoSuter#1. I need to split it up into smaller chunks so it is acceptable here. The first PR is a no-brainer fix: #2235
This PR is a proposal to add an API for batched message publishing on the server side (InjectApplicationMessages). It's the only way I can get my benchmark working with my "digital twin" synchronization library. Am I doing something fundamentally wrong, or would you consider merging this proposal?
I think we should try to simplify the code (e.g. make the single-message method call into the batch method) and add tests before merging. Also, the other optimizations (subscription cache, promise cache) are nice to have but probably not game changing (we can run more benchmarks without them).
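The "single-method calls into batch method" simplification could look roughly like the sketch below. This is a hypothetical shape, not the PR's actual code: the class and method signatures, and the stub InjectedMqttApplicationMessage type, are assumptions modeled on the PR title, and DispatchAsync is a placeholder for the server's real per-message dispatch.

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Stub standing in for MQTTnet's injected-message type.
public sealed class InjectedMqttApplicationMessage { }

public class BatchInjectingServer
{
    // The single-message API becomes a thin wrapper over the batch API,
    // so only one code path has to be maintained and tested.
    public Task InjectApplicationMessage(
        InjectedMqttApplicationMessage message,
        CancellationToken cancellationToken = default)
        => InjectApplicationMessages(new[] { message }, cancellationToken);

    public async Task InjectApplicationMessages(
        IReadOnlyList<InjectedMqttApplicationMessage> messages,
        CancellationToken cancellationToken = default)
    {
        // One entry into the server's shared state (session list,
        // subscription lookups) amortized over the whole batch, instead of
        // paying the per-call overhead for every message.
        foreach (var message in messages)
        {
            cancellationToken.ThrowIfCancellationRequested();
            await DispatchAsync(message, cancellationToken).ConfigureAwait(false);
        }
    }

    // Placeholder for the real per-message dispatch logic.
    private Task DispatchAsync(
        InjectedMqttApplicationMessage message, CancellationToken ct)
        => Task.CompletedTask;
}
```

With this shape, the batch method is the only implementation, and the throughput win comes from callers handing over many messages per call rather than from two diverging code paths.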
The MQTT PR where the only change is using the single vs. the batched API can be found here:
https://github.com/RicoSuter/Namotion.Interceptor/pull/108/files#diff-192178ef0628084daa8939a2268ae10733b0426da036edfb4e9d4048199fd8e3R206
Benchmarks:
(Tested on a MacBook Pro M4 Max)
With batch APIs:
Baseline (current NuGet version with non-batch APIs):
This benchmark can easily be tried out: clone the PR from the other repo and switch to project references in Namotion.Interceptor.Mqtt:
RicoSuter/Namotion.Interceptor#108