These reports span several years and all concern HTTP Smuggling issues
(HTTP request or response splitting, cache poisoning, security filter bypass).
I have filed reports against a wide range of open source projects, explaining
the (not always easy) problems to the various security maintainers and testing the fixes.
The starting point for this work was the 2005 research published by Amit Klein and some others,
and also the work of James Kettle on HTTP Host headers, “Practical HTTP Host header attacks (Absolute URI in Host headers)”:
https://www.skeletonscribe.net/2013/05/practical-http-host-header-attacks.html
and, later, his work on ESI, server pingbacks, cache attacks, and “Practical Web Cache Poisoning”.
In 2015, starting from these past studies, I studied the Apache, Nginx, and Varnish source code. I discovered
that a lot of smuggling problems were still present, found new ones based on overflows of size
attributes (previous works were mostly based on duplicated length information), and extended my work to
Golang, Node.js, Pound, HAProxy, Jetty, Tomcat, Apache Traffic Server…
I sometimes had to push for disclosure of fixed vulnerabilities (Varnish 3) via Bugtraq.
But in most cases it has been a matter of patience. The long delay between reports and fixes
also has something to do with laziness on my side, as security is not the biggest part of my job,
and most of the fixes imply updates to HTTP servers, which is not something as fast as updating a web
application framework. I did not get a security advisory or a CVE for each reported flaw, especially
in the first years. Smuggling is sometimes hard to explain (and public disclosure policies
are not always liked by HTTP server dev teams).
The main problem with HTTP smuggling issues is that the final exploitation comes from interactions between different HTTP parsers. If two actors interpret an HTTP message badly, or disagree on the right
interpretation, then bad things can happen. From the security maintainer's point of view it is sometimes
easy to reject the problem as coming from the others.
It is also very important to understand that the attacker controls the HTTP message: we are not dealing with HTTP messages from browsers. The attacker injects bad HTTP messages into server infrastructures; the effects on users come later, when real user HTTP messages reach the infected servers. For example, when you report a smuggling issue on HackerOne, reporters are warned that header injection issues are not always security issues because the attacker cannot control the user's headers. That is a huge misunderstanding of smuggling payloads.
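As a toy illustration of such a parser disagreement (a hypothetical sketch, not any real server's code), here are two naive parsers that disagree on which of two duplicated Content-Length headers is authoritative:

```python
# Hypothetical sketch (not any real server's code): two naive HTTP parsers
# that disagree on which of two duplicated Content-Length headers wins.

RAW = (b"POST / HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Content-Length: 6\r\n"
       b"Content-Length: 0\r\n"
       b"\r\n"
       b"GET /admin HTTP/1.1\r\n\r\n")

def split_message(raw, pick):
    """Split one message off the stream; `pick` chooses among the
    duplicated Content-Length values. Returns (message, leftover)."""
    head, _, rest = raw.partition(b"\r\n\r\n")
    lengths = [int(line.split(b":", 1)[1].strip())
               for line in head.split(b"\r\n")
               if line.lower().startswith(b"content-length")]
    n = pick(lengths)
    return head + b"\r\n\r\n" + rest[:n], rest[n:]

# A front-end honouring the FIRST header consumes 6 body bytes...
_, left_first = split_message(RAW, lambda ls: ls[0])
# ...while a back-end honouring the LAST header sees an empty body and
# treats the same bytes as the start of a new, smuggled request.
_, left_last = split_message(RAW, lambda ls: ls[-1])
assert left_last.startswith(b"GET /admin")
```

RFC 7230 closed this particular class by requiring duplicated or conflicting Content-Length headers to be rejected, which is the kind of rule these reports pushed servers to enforce.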
I have written some blog posts explaining the details of some of the fixed problems (I still have one awaiting vendor authorization).
I also gave a DEF CON 24 presentation in 2016. For someone who knows nothing about smuggling
it is a good starting point (links in the next part below).
Note : my work is usually reported with the name ‘regilero’, and sometimes ‘Régis Leroy’.
Tools: HTTPWookiee (https://github.com/regilero/HTTPWookiee): this contains a small subset of the real tests I perform on HTTP servers.
# Jetty
CVE-2017-7656 : HTTP/0.9 Request Smuggling
https://vulners.com/cve/CVE-2017-7656 (score 6.5)
CVE-2017-7657: Transfer-Encoding Request Smuggling
https://vulners.com/cve/CVE-2017-7657 (score 6.5)
CVE-2017-7658: Too Tolerant Parser, Double Content-Length + Transfer-Encoding + Whitespace
https://vulners.com/cve/CVE-2017-7658 (score 6.5)
# Apache httpd
https://bz.apache.org/bugzilla/show_bug.cgi?id=57832 : Apache issue with ‘socket poisoning’, where HTTP responses could be stored on
the reverse proxy by sending extra responses, and mixed with other users' responses later. It was not fixed via a CVE because this behavior
was not considered a real security issue (it is a consequence of a successful splitting attack on the backend, or of a compromised backend).
If you ask my opinion, this is one of the most problematic issues I found in these 5 years. Fixes were included in 2016 in version 2.4.24.
CVE-2016-8743 : httpd: Apache HTTP Request Parsing Whitespace Defects : problems with CR, FF, VTAB and other strange characters when parsing HTTP messages,
especially the space-before-colon problem. There were also some HTTP/0.9 downgrades.
This work contributed to the internal dev debates around the HttpProtocolOptions Strict|LenientMethods|Allow0.9 option added in 2.4.
CVE-2015-3183 : chunk header attribute truncation (low)
# Proxygen
Proxygen is a C++ open source library which is the core library for Facebook's HTTP-related projects.
In 2016 I reported several smuggling issues (about doubled headers or bad ends of lines, for example) via the Facebook bounty program (#1710044992591113).
# Pound
Pound is an open source SSL terminator, but the project has not published major changes for a long time, and I experienced difficulties getting my reports fixed and delivered to end users.
After my reports in 09-2016, a version 2.8a fixing the flaws was published in 10-2016, but marked as experimental.
Details of the flaws were published in 07-2018. The CVE was reserved by myself in 2018-01. Version 2.8 was published in 2018-05.
Details of the issues (double Content-Length, chunk priority, header concatenation via NULL character, etc.) are published in my blog post: https://regilero.github.io/english/security/2018/07/03/security_pound_http_smuggling/
# Varnish
HTTP/0.9 support was removed after my reports in 2015, but without public disclosure of the potential abuses.
This is not the project where I had the most success; I do not think any smuggling issue would be considered a security issue there.
# OpenBSD httpd
In 2015 the OpenBSD HTTP server was very new and crashed on HTTP/0.9 requests. I reported some smuggling issues (bad end of line, double Content-Length) which were fixed later.
# HaProxy
HAProxy was transmitting some of the very bad requests I use to perform splitting attacks on backends (something which is not a security issue in itself, but which allows security issues).
I had various discussions with Willy Tarreau which led to some improvements in HAProxy, blocking bad requests before any less robust HTTP parser could read them.
I think this work has made HTTP servers more robust. Some of the very old issues already reported in the 2005-era studies, like double Content-Length,
were still widely accepted in 2015 and are now harder to find in most open source HTTP servers. I think I contributed greatly to enforcing the RFC 7230
anti-smuggling policies (chunk priority, no double Content-Length) and to the removal of dangerous old-RFC features (like the continuation of headers
with a space prefix, or HTTP/0.9 support). For this I just had to read the 2005 studies and the RFC, test the servers, and try to explain the
exploitations.
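The chunk-priority rule mentioned above can be sketched with a minimal toy decoder (illustrative only, not any server's actual code): when a message carries both Transfer-Encoding: chunked and Content-Length, RFC 7230 says the chunked framing wins; a parser trusting Content-Length instead frames the stream differently, which is exactly the desynchronization that smuggling exploits.

```python
# Minimal toy chunked-body decoder (no trailers, no chunk extensions),
# used to show why RFC 7230 makes Transfer-Encoding win over
# Content-Length when both headers are present.

def read_chunked(body):
    """Decode a chunked body; returns (decoded, leftover_bytes)."""
    out, pos = b"", 0
    while True:
        eol = body.index(b"\r\n", pos)
        size = int(body[pos:eol], 16)
        if size == 0:
            return out, body[eol + 4:]       # skip the "0\r\n\r\n" terminator
        out += body[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2             # chunk data + trailing CRLF

# Stream sent with BOTH "Transfer-Encoding: chunked" and "Content-Length: 3":
STREAM = b"5\r\nhello\r\n0\r\n\r\nGET /admin HTTP/1.1\r\n\r\n"

decoded, left_te = read_chunked(STREAM)      # chunked view: body is "hello"
left_cl = STREAM[3:]                         # naive CL:3 view: body is "5\r\n"
assert decoded == b"hello"
assert left_te != left_cl                    # the two parsers desynchronize
```

Under the chunked view the leftover bytes start a new request (`GET /admin …`); under the Content-Length view they do not, so two chained actors no longer agree on where one request ends and the next begins.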
A big part of my added work and reports was studying the effects of control characters (\r, \n, NULL, vtab, htab, bell, backspace and formfeed) on various parts of the messages,
with some real successes on various projects for NULL characters or bad ends of lines.
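As a hypothetical sketch of this bug class (the two parsers here are invented for illustration, not taken from any specific project), compare a lenient parser that strips "whitespace" around header field names with a strict one that only accepts RFC 7230 token characters:

```python
import string

# RFC 7230 "token" characters allowed in a header field name.
TOKEN = set(string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~")

def lenient_name(raw_name):
    # Bug class: str.strip() removes VTAB (\x0b) and FF (\x0c) too,
    # not only the SP/HTAB characters HTTP grammar ever tolerated.
    return raw_name.strip().lower()

def strict_name(raw_name):
    # Strict parser: reject any field name containing non-token characters.
    if not raw_name or any(c not in TOKEN for c in raw_name):
        raise ValueError("invalid header field name: %r" % raw_name)
    return raw_name.lower()

evil = "Content-Length\x0b"          # field name padded with a vertical tab
assert lenient_name(evil) == "content-length"   # lenient: a real CL header!
# strict_name(evil) raises ValueError: the two parsers now disagree on
# whether the message carries a Content-Length at all.
```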
Another big thing was studying HTTP/0.9 downgrade exploitations (like extracting a valid HTTP message stored in an image from a partial HTTP/0.9 response) and
finding new HTTP/0.9 downgrade vectors.
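Why these downgrades matter can be sketched as follows (a toy status-line parser and an attacker-crafted "image" invented for the example): an HTTP/0.9 response is just the raw resource bytes, with no status line or headers in front, so whatever the bytes begin with is what a waiting HTTP/1.x response parser will believe.

```python
# Toy HTTP/1.x response status-line parser (illustrative only).
def parse_status_line(stream):
    line, _, rest = stream.partition(b"\r\n")
    version, code, reason = line.split(b" ", 2)
    if not version.startswith(b"HTTP/1."):
        raise ValueError("not an HTTP/1.x response")
    return int(code), reason

# A file (e.g. an uploaded "image") crafted to start with a complete HTTP
# response. Served over HTTP/0.9 it is sent verbatim, with no real status
# line in front, so the parser accepts the attacker-controlled one:
crafted_image = (b"HTTP/1.1 200 OK\r\n"
                 b"Content-Type: text/html\r\n"
                 b"Content-Length: 25\r\n"
                 b"\r\n"
                 b"<script>alert(1)</script>")

code, reason = parse_status_line(crafted_image)
assert code == 200          # a poisoned, valid-looking response
```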
Finally, another part of this work was finding new attack vectors (truncation of size attributes, overflows, concatenation of strings, effects of cache hits on header parsing, etc.).
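The size-overflow vector can be illustrated with a hypothetical pair of parsers (the 32-bit truncation shown is a generic bug class, not any specific project's code): when a chunk-size field is stored in a 32-bit integer, a huge hexadecimal value silently wraps, and two chained parsers end up framing the body completely differently.

```python
# Generic illustration of the size-overflow bug class (no specific
# server implied): a chunk-size field parsed into a 32-bit integer wraps.

def chunk_size_64(hex_field):
    # Parser with a wide integer: reads the full declared size.
    return int(hex_field, 16)

def chunk_size_32(hex_field):
    # Buggy parser: the value is silently truncated to 32 bits.
    return int(hex_field, 16) & 0xFFFFFFFF

evil = b"100000005"                          # 0x100000005 == 2**32 + 5

assert chunk_size_64(evil) == 2**32 + 5      # one parser expects ~4 GiB...
assert chunk_size_32(evil) == 5              # ...the other only 5 bytes
```

After those 5 bytes the truncating parser starts reading a "next" chunk header inside what the other actor still considers body data: the same desynchronization as the length-duplication tricks, obtained without doubling anything.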
The last big part of my work was spending a long time explaining the potential attacks to maintainers. If you need hints from people who understand smuggling attacks
and the implications of the fixed flaws, usually better than the project maintainers do, I can give you some names. If you need samples of reports or detailed lab exploitations I can also provide them.
HTTP/2 and TLS do not prevent the bad effects of broken HTTP/1.1 parsers (they embed HTTP/1.1 parsers in another layer), nor can they prevent the effects of HTTP/0.9 downgrades.
Every HTTP actor which enforces more robust protocol parsing prevents the chaining effects of smuggling attacks.
So I hope the work I did on the subject has had real effects on the ecosystem.
Some of these CVEs were already selected for bounties.
For the final user the consequences may be huge:
A massive-scale smuggling attack on a big actor (a cloud provider, for example) could produce a huge DoS.
A more realistic usage, with a public consequence, is a targeted cache poisoning to inject an XSS.
An advanced usage is filter bypass, where the smuggled request is usually not even logged. A perfect way of sending requests without being noticed, and thus a nice tool for SSRF exploits.