Respecting max_size on fragments but not on message #1602
This is tested here: websockets/tests/test_protocol.py, lines 387 to 404, at 6304d10.
There's no built-in way to limit the size of each individual fragment. I'm not convinced that this would be greatly useful in general. If you want it, you can simply do:

```python
async for frag in web_skt.recv_streaming():
    if len(frag) > 100_000:
        raise ValueError("fragment too large")
    consume_fragment(frag)
```
I'm skeptical because RFC 6455 makes it clear that any intermediary can split or reassemble fragments in any way they want. Enforcing limits at that level seems sketchy.
Here's a more structured recap of the situation.

Legacy implementation

Since version 3.2 (exactly 9 years ago!) websockets has had `max_size`. The legacy implementation contained a buffer of messages after reassembling fragmented messages. The maximum size of the buffer was literally `max_size * max_queue`. The legacy implementation didn't let you access frames with `recv_streaming()`.

Current implementations

The new implementations kept the logic of limiting the size of frames with `max_size`. However, they changed the logic of limiting the size of the buffer with `max_queue`, which now counts frames rather than reassembled messages.
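For orientation, here's a minimal sketch of how those two knobs appear on the new asyncio server; the host, port, and limit values are illustrative, not recommendations:

```python
import asyncio
from websockets.asyncio.server import serve

async def handler(websocket):
    # max_size bounds each reassembled message delivered by this iterator.
    async for message in websocket:
        ...

async def main():
    # In the legacy implementation, worst-case buffered bytes were roughly
    # max_size * max_queue; the new implementation's max_queue counts
    # buffered frames instead.
    async with serve(handler, "localhost", 8765,
                     max_size=2**20, max_queue=16) as server:
        await server.serve_forever()

asyncio.run(main())
```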
Challenge

We have a conflict here between two goals: bounding the amount of memory that a peer can force the receiver to use, and letting the application consume a message of arbitrary total size fragment by fragment.

The current implementation fails the second goal in the case of messages with many small fragments, as described in this issue. When I wrote it, I treated fragmentation as a transport-level concern. I considered that the receiver should always be able to handle the full message. Upon further thought, this seems incorrect. For example, if you want to transfer a large file in a WebSocket message (why not?), you can send it chunk by chunk at one end and write it to disk chunk by chunk at the other end. That should work even if it's a 10GB file and you don't want to hold it in memory.

Potential solution

I'm considering supporting a limit on the size of fragments in addition to the limit on the size of messages, by overloading `max_size`.
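As a concrete sketch of that file-transfer case (the function name `receive_file` and the assumption of a binary message are mine, not from the library docs):

```python
async def receive_file(websocket, path):
    # Consume one potentially huge fragmented message, spilling each
    # fragment to disk so the full message never sits in memory.
    with open(path, "wb") as f:
        async for fragment in websocket.recv_streaming():
            f.write(fragment)  # fragments are bytes for a binary message
```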
Hello, would you be able to test if #1622 does what you want?
I was initially unhappy with the concept of overloading `max_size`.
Yes, I know, I've been hesitating between overloading `max_size` and adding a new argument... I went for overloading because:
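Concretely, if `max_size` is overloaded as discussed, usage might look something like this; the tuple shape, `(max_message_size, max_fragment_size)`, is an assumption based on this thread rather than a confirmed signature:

```python
import asyncio
from websockets.asyncio.server import serve

async def handler(websocket):
    # Each fragment is individually bounded; the message as a whole is not.
    async for fragment in websocket.recv_streaming():
        ...

async def main():
    # Hypothetical overloaded form: (max message size, max fragment size).
    async with serve(handler, "localhost", 8765,
                     max_size=(None, 2**20)) as server:
        await server.serve_forever()

asyncio.run(main())
```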
I don't think this is an asyncio issue/question; it sits above that level.
What I've got is a websocket receiving and consuming fragments from its peer, just as indicated in the docs:
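Something like this, with `websocket` as the connection and `consume` standing in for my application logic:

```python
async for fragment in websocket.recv_streaming():
    consume(fragment)
```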
What I don't see (please correct me if I'm missing this) is a way to limit the size of the fragments (so that no peer can use up all of my memory) without also putting a limit on the total message size.
That is, I can certainly set the `max_size` arg in `websockets.asyncio.server.serve()`, but when the sum total of the size of the fragments (that I have already consumed) reaches that limit, then an exception is raised. Since it's a "streaming" mode, it seems like the total size of a message could be large.

Setting `max_size=None` certainly bypasses that issue, but then I have no way to limit the size of fragments before they (potentially) exhaust my memory.

Or, am I missing something?