From liorbrown at outlook.co.il  Wed Feb 18 10:59:33 2026
From: liorbrown at outlook.co.il (Lior Brown)
Date: Wed, 18 Feb 2026 10:59:33 +0000
Subject: [squid-dev] Architectural inquiry: Design rationale behind serialized request handling for identical URLs
Message-ID:

Hello,

My name is Lior Brown; I am a research assistant at Ariel University and a contributor to squid-cache. I am currently working on a research project involving request dispatching and peer selection within the Squid-cache core.

During my development, I have encountered several mechanisms and logic blocks within the source code that deliberately prevent or queue parallel HTTP requests for the same URL, effectively serializing them. I would like to understand the fundamental design rationale behind these restrictions. Specifically: Are these blocks in place due to specific architectural constraints (such as memory management or Store Entry state transitions)? Are there known side effects or risks I should be aware of if I attempt to implement parallel requesting for identical objects in a research environment?

I want to ensure I fully understand the system's design philosophy before proceeding with any modifications. Thank you for your time and insights.

Best regards,

From rousskov at measurement-factory.com  Wed Feb 18 16:53:30 2026
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Wed, 18 Feb 2026 11:53:30 -0500
Subject: [squid-dev] Architectural inquiry: Design rationale behind serialized request handling for identical URLs
In-Reply-To:
References:
Message-ID: <6c0b53f8-7e71-4076-8f05-3eaffea872b7@measurement-factory.com>

On 2026-02-18 05:59, Lior Brown wrote:
> I am currently working on a research project involving request
> dispatching and peer selection within the Squid-cache core.
>
> During my development, I have encountered several mechanisms and logic
> blocks within the source code that deliberately prevent or queue
> parallel HTTP requests for the same URL, effectively serializing them.

I do not think Squid does that by default. Either you are misinterpreting Squid code or you are looking at code related to the collapsed_forwarding feature (see squid.conf.documented). That feature is off by default (and it queues, but does not serialize, parallel requests).

I recommend providing at least one specific code example to illustrate/substantiate your claim.

HTH,

Alex.

> I would like to understand the fundamental design rationale behind these
> restrictions. Specifically: Are these blocks in place due to specific
> architectural constraints (such as memory management or Store Entry
> state transitions)? Are there known side effects or risks I should be
> aware of if I attempt to implement parallel requesting for identical
> objects in a research environment? I want to ensure I fully understand
> the system's design philosophy before proceeding with any modifications.
> Thank you for your time and insights.
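
For readers looking up the feature Alex references: collapsed_forwarding is controlled by a single squid.conf directive. A minimal sketch, assuming a Squid build that includes the feature; the directive name and its off-by-default behavior come from squid.conf.documented, and the comment paraphrases that documentation:

    # Merge concurrent cache misses for the same cachable object into a
    # single upstream fetch: later requests queue on the in-flight
    # StoreEntry and are served from it, rather than each opening its
    # own parallel connection to the origin. The default is off.
    collapsed_forwarding on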