Manual requests to an HTTPS web server with “openssl s_client”: do not forget the options “-ign_eof” and “-crlf”

Sometimes you may need to analyze how a web server or a REST service responds to certain requests. And sometimes you are restricted to the command line of a Linux system (e.g. during penetration testing), so you have to type and send the HTTP commands directly. While this is trivial with telnet over an unencrypted connection on port 80, for HTTPS servers you must use a tool like openssl that can handle the TLS tunnel.

A quick search on the Internet will show you that you should be able to use “openssl” on a Linux system in the following form:

openssl s_client -connect YOUR_TARGET_WEB_DOMAIN:443

Or – if you do not want to look at certificates and related CA chains in detail – with an additional option “-quiet”:

openssl s_client -quiet -connect YOUR_TARGET_WEB_DOMAIN:443

YOUR_TARGET_WEB_DOMAIN has to be replaced, of course, by a valid domain name. To restrict the encryption to TLS v1.2 you would instead use:

openssl s_client -quiet -tls1_2 -connect YOUR_TARGET_WEB_DOMAIN:443

For some servers the additional option “-ign_eof” can be helpful: it prevents the connection from being closed directly when an end of file (EOF) is detected on the input – without it, the connection may be cut before the response is shown. The option “-quiet” implies “-ign_eof”. But keep in mind that this option does not prevent any timeouts imposed on the connection by the server.
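If you want to keep the full certificate and handshake output but still avoid a premature shutdown, you can, for instance, use “-ign_eof” on its own – a minimal sketch (replace the domain as before):

openssl s_client -ign_eof -connect YOUR_TARGET_WEB_DOMAIN:443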

A problem with the interactive “openssl s_client” command-line on Linux systems

After entering the above commands at the command prompt of a Linux shell (e.g. bash) you will first see some information regarding the connection handshake and the establishment of the encryption tunnel. Then you end up on a line where you can interactively enter HTTP commands. You would expect to enter commands successfully in the following way:

We type “GET / HTTP/1.1” (without the quotes)    =>    we press the “ENTER”-key    =>    we get a new line    =>    we type “Host: YOUR_TARGET_WEB_DOMAIN” (without the quotes)    =>    we press the “ENTER”-key twice.
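In other words, the raw request we intend to send consists of just these lines – the empty line terminates the header section:

GET / HTTP/1.1
Host: YOUR_TARGET_WEB_DOMAIN
(empty line)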

You may try this sequence with “google.com”. It will work! You get, however, a notice that the document has moved to “www.google.com”. But at the new address everything works, too.

So far so good! But let us try the given recipe with another domain: www.debian.org. Using the command line within “s_client” then leads to an error:

.....
    Start Time: 1609195616
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK
GET / HTTP/1.1
HTTP/1.1 400 Bad Request
Date: Mon, 28 Dec 2020 22:47:04 GMT
Server: Apache
X-Content-Type-Options: nosniff
X-Frame-Options: sameorigin
Referrer-Policy: no-referrer
X-Xss-Protection: 1
Strict-Transport-Security: max-age=15552000
Content-Length: 291
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
<hr>
<address>Apache Server at www.debian.org Port 443</address>
</body></html>
closed

You do not even get a chance to enter the “Host: …” line!
The same is true for other servers – e.g. the one serving one of my own domains, “anracon.de”.

A simple trick shows what the correct request format is and that it works …

Let us use a simple trick with echo and a pipe on the Linux shell:

myself@mytux:~> (echo -ne "GET / HTTP/1.1\r\nHost: www.debian.org\r\n\r\n") | openssl s_client -tls1_2 -quiet  -connect www.debian.org:443

This leads to

depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = www.debian.org
verify return:1
HTTP/1.1 200 OK
Date: Mon, 28 Dec 2020 23:09:01 GMT
Server: Apache
Content-Location: index.en.html
Vary: negotiate,accept-language,Accept-Encoding,cookie
TCN: choice
X-Content-Type-Options: nosniff
X-Frame-Options: sameorigin
Referrer-Policy: no-referrer
X-Xss-Protection: 1
Strict-Transport-Security: max-age=15552000
Upgrade: h2,h2c
Connection: Upgrade
Last-Modified: Sun, 27 Dec 2020 19:27:21 GMT
ETag: "36b1-5b777257b5a41"
Accept-Ranges: bytes
Content-Length: 14001
Cache-Control: max-age=86400
Expires: Tue, 29 Dec 2020 23:09:01 GMT
X-Clacks-Overhead: GNU Terry Pratchett
Content-Type: text/html
Content-Language: en

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <title>Debian -- The Universal Operating System </title>
...
...
</div>
<!--/UdmComment-->
</div> <!-- end footer -->
</body>
</html>

 
After a timeout we get back our prompt.

So, we see that the server responds properly when the correct end-of-line characters are used, namely “\r\n” after “GET / HTTP/1.1” and after “Host: www.debian.org”, plus an empty line (leading to two “\r\n” sequences at the end of the request).

But if we change this to

myself@mytux:~> (echo -ne "GET / HTTP/1.1\nHost: www.debian.org\r\n\r\n") | openssl s_client -tls1_2 -quiet  -connect www.debian.org:443

we run into an “HTTP/1.1 400 Bad Request” answer again (note the single “\n” before “Host: …”).

Obviously, the “openssl s_client” command line sends only an LF character when we press the “ENTER”-key. Well, the interface uses the typical Linux/Unix style for end-of-line characters …

By the way:
Some servers set very short timeouts for receiving all request lines – you may have difficulties typing fast enough. In such cases the “trick” with an echo command and a pipe is very useful for testing the server nonetheless.
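If you only want to check the response headers – and avoid both long output and the server timeout – a HEAD request with an explicit “Connection: close” header is handy; a sketch, again using www.debian.org:

myself@mytux:~> (echo -ne "HEAD / HTTP/1.1\r\nHost: www.debian.org\r\nConnection: close\r\n\r\n") | openssl s_client -quiet -connect www.debian.org:443

The “Connection: close” header makes the server close the connection right after the headers, so the prompt returns without waiting for a timeout.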

What is the reason for the unequal behavior of different servers?

The correct way to build HTTP requests is described e.g. at “www.tutorialspoint.com/http/http_requests.htm”; I quote:

  • A Request-line
  • Zero or more header (General|Request|Entity) fields followed by CRLF
  • An empty line (i.e., a line with nothing preceding the CRLF) indicating the end of the header fields
  • Optionally a message-body

According to RFC 7230 an HTTP request-line has the following format:

request-line = method SP request-target SP HTTP-version CRLF

However, in section 3.5 the same RFC also says:

Although the line terminator for the start-line and header fields is the sequence CRLF, a recipient MAY recognize a single LF as a line terminator and ignore any preceding CR.

So, this explains why different web servers react differently to the commands sent with “openssl s_client”.
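You can verify the difference on the byte level with a quick dump of the two variants (if hexdump is available on your system): in the second dump the request line ends with the two bytes 0d 0a (CR LF), in the first one only with a single 0a (LF):

echo -ne "GET / HTTP/1.1\n" | hexdump -C
echo -ne "GET / HTTP/1.1\r\n" | hexdump -C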

What can we do with “openssl s_client” to enforce a CRLF as the end of lines when pressing the “ENTER”-key?

The “openssl s_client” command has a lot of options. You find an overview at the following URI:
https://zoomadmin.com/HowToLinux/LinuxCommand/s_client
The required option for our purpose is “-crlf“; we thus arrive at:

openssl s_client -quiet -crlf -tls1_2 -connect YOUR_TARGET_WEB_DOMAIN:443

as the right way to get RFC-compliant web servers to answer manually sent HTTP requests within “openssl s_client” without error messages.
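With the “-crlf” option set, the interactive session against www.debian.org from above succeeds; a minimal sketch of such a session (handshake output omitted):

myself@mytux:~> openssl s_client -quiet -crlf -tls1_2 -connect www.debian.org:443
GET / HTTP/1.1
Host: www.debian.org

HTTP/1.1 200 OK
...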

Prepare your terminal for long answers from the server

Some web-sites may return long pages. The HTML code sent to you as an answer may be pretty long. As most Linux terminals limit the scrollback length by default, you should look for settings that allow long or unlimited output in the terminal emulation of your choice. KDE’s “konsole”, e.g., offers an option for scrollback of unlimited length. A noteworthy side effect is, however, that the content is then saved unencrypted in temporary files (for which you can define a location on your PC). So, be a bit careful when using such options.
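Alternatively, you can capture the complete answer in a file instead of relying on the terminal’s scrollback; a simple sketch with tee (the file name is arbitrary):

(echo -ne "GET / HTTP/1.1\r\nHost: www.debian.org\r\n\r\n") | openssl s_client -quiet -connect www.debian.org:443 | tee /tmp/debian_response.txt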

Conclusion

Using CRLF is the RFC-defined way to properly end HTTP request and header field lines (… whether we on the Linux side like this MS-influenced definition or not …). To force “openssl s_client” to interpret the “ENTER”-key as “CRLF” (instead of “LF”) we should use the option “-crlf” when starting “s_client”. The additional options “-ign_eof” or “-quiet” are useful to prevent a shutdown of the connection before the server’s answer is fully displayed.

Links

https://zoomadmin.com/HowToLinux/LinuxCommand/s_client
https://stackoverflow.com/questions/5757290/http-header-line-break-style
https://tools.ietf.org/html/rfc7230#section-3.5
https://www.tutorialspoint.com/http/http_requests.htm
https://www.tutorialspoint.com/http/http_quick_guide.htm

A remarkable experience with an HTTP Error 406 and some security measures of a web-hosting provider

It is very seldom that you are confronted with an HTTP status message of type 406 “Not Acceptable”. However, this happened yesterday to a customer who uses a renowned hosting provider (in Norway) to publish his web-sites. The customer uses his own WordPress installation on hosted web servers. His favorite browser is Firefox on a Win 10 desktop system. A week ago he could work without any restrictions. Then suddenly everything changed.

Access to website and WP admin interface broken due to security measures of the provider

At some point during the last week the hosting provider changed the security policies on its (Norwegian) Apache servers. The provider seems to have at least changed settings of the “mod_security” module – and thereby started to block old browsers via some rules. (Maybe they even introduced mod_security for the first time?) Implementing mod_security with a reasonable set of rules is basically a good measure.
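We do not know the provider’s actual rule set, of course. But just to illustrate the mechanism: a mod_security rule that rejects requests based on the User-Agent header with a 406 status could look roughly like the following (purely hypothetical rule id and pattern, not the provider’s configuration):

SecRule REQUEST_HEADERS:User-Agent "@rx Firefox" "id:100001,phase:1,deny,status:406,msg:'User-Agent not acceptable'"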

However, the effect was that our customer got a 406 error whenever he tried to access his web-site with his Firefox browser. The “406 Not Acceptable” message indicates that a web server cannot or will not (due to some rules) satisfy certain conditions of the HTTP GET or POST request. Our customer uses the latest version of Firefox. He tested whether he got something similar on a test installation on one of our hosted servers in Germany. Of course not.

A subsequent complaint from our customer was answered by his provider; in a direct translation the answer says:

Contact the Firefox technicians or use Chrome!

Very funny! Our customer asked us for help. We tested the web server’s response with multiple browsers from Linux and Windows desktops. The problem seemed to exist only for Firefox and only on desktop systems. This already indicated a strange server reaction to the HTTP “User-Agent” string.
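By the way, such a reaction can be reproduced quickly from a Linux command line by sending the same request with different User-Agent strings, e.g. with curl (the target domain is a placeholder, and in the described scenario the first command would print 406 while the second one would print 200):

curl -s -o /dev/null -w "%{http_code}\n" -A "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0" https://YOUR_TARGET_WEB_DOMAIN/
curl -s -o /dev/null -w "%{http_code}\n" -A "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0 SomeSuffix" https://YOUR_TARGET_WEB_DOMAIN/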

But this was only part of the strange experience our customer had due to the new security measures. In addition, his provider enforced the use of an Apache htaccess password (HTTP Basic Authentication) for all users who maintain their own WordPress installation on the hoster’s web servers. Our customer suddenly needed to provide a UserID and a password to get access to the “wp-admin” directory of his WordPress installation. We found out about this intentionally imposed restriction only by looking at the provider’s public website: there, in a side column, we found a message regarding the new restriction, asking customers to contact the hoster’s specialists for the required credentials. Our customer had not been informed directly by the provider about this new policy. So, we simply sent the provider a mail and asked for the authentication data for the admin folder of our customer’s WP installation. We got it one day later via email.

In my opinion these procedures indicate some mess we are facing with improperly handled IT-security activities these days.

Some comments regarding enforced HTTP Basic Authentication for WP’s admin directory

Comment 1: It is, of course, OK to enforce password-protected HTTP access to directories of a web server. But this is only an effective protection measure if the provider at the same time enforces TLS/SSL encryption for all access to the hosted websites. Otherwise the password would be sent in clear text over the Internet. However, you can still run a WordPress installation or other CMS installations on the provider’s web servers without any SSL certificate. Our customer has an SSL certificate – but he had to pay for it. Here the business interests of the provider obviously collide with real security procedures.
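For reference, such an enforced protection of the “wp-admin” directory typically boils down to a few lines in an .htaccess file plus an htpasswd file – a generic sketch, not the provider’s actual configuration (paths and names are placeholders):

AuthType Basic
AuthName "Restricted WP admin area"
AuthUserFile /path/to/.htpasswd
Require valid-user

The corresponding password file can be created with “htpasswd -c /path/to/.htpasswd SOME_USER” – which also shows why a customer should be able to manage such credentials himself.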

Comment 2: Personally, I regard it as a major mistake to set a common UserID and a fixed permanent password for customers and to send these credentials to a web admin via an unencrypted email. Ironically enough, the provider asked the receiver of the mail to take note of the password and then to destroy the mail. So, mails on the customer’s mail system are dangerous, but the transfer of an unencrypted mail over at least partially unencrypted Internet lines is not?

Hey, we are not talking about a one-time password here – but about permanent credentials set and enforced by the provider. The CPanel admin tool offered by the hosting provider does NOT allow changing the fixed htaccess password set by the provider’s admins.

Furthermore, why announce this policy on a public website and not inform the customers via a secure channel? Next question: How did they know that we were authorized to request the access data without contacting our customer first ???

The mess with the User-Agent string

Also interesting was the analysis of the Firefox problem. We can demonstrate the effect on the provider’s own website. Here is what you presently (18.10.2019) get when opening the homepage of the provider with Firefox from a Linux desktop:

And here is what you get when you manipulate the User-Agent string a bit:

The blue rectangles have been added so as not to reveal the provider’s name directly. Note the 406 error message in the FF developer tool at the bottom!

Well, well … Our customer got the following when opening his own web-page:

Some analysis showed that we got a correct display of the website in the same browser if we manipulated the HTTP User-Agent string of Firefox a bit. One way to do this is offered by Firefox’s web developer tools. However, there are also good plugins for faking the User-Agent string.
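Besides the developer tools and plugins, Firefox also lets you override the string persistently via the preference “general.useragent.override” in “about:config” – a sketch with a slightly modified value (the appended suffix is arbitrary, just for testing):

general.useragent.override = "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0 Test"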

The next question was: Which part of the User-Agent string did the provider’s Apache servers react allergically to?

The standard User-Agent string of Firefox in an HTTP GET or POST request is defined to have the following structure:

Mozilla/5.0 (platform; rv:geckoversion) Gecko/geckotrail Firefox/firefoxversion

This can be learned from the related explanations at mozilla.org:
Firefox User Agent string

“geckotrail” can indicate a version or a date. However – to quote:

On Desktop, geckotrail is the fixed string “20100101”

And when we check the User-Agent string of Firefox on e.g. a Linux desktop we indeed get:

Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0


Firefox on Linux – reactivating the cache after the update to Quantum

During certain test phases of web solutions I often switch off the cache of my favorite browser with the help of add-ons or via the developer tools integrated into FF. In the old Firefox (e.g. the FF version 52.8 ESR shipped with Opensuse Leap 42.3) the add-on “Web Developer” offered, for a long time, an option to permanently switch the FF cache off and on. Under Quantum the same add-on no longer offers this option.

With an update to Firefox Quantum on Linux, however, the cache-deactivation setting previously chosen in the old Firefox (version 52.8, ESR) had somehow taken on a life of its own:

After the update to Quantum no files were taken from the cache anymore – the network analysis in Firefox’s own developer tools showed a permanent reloading of all page elements. When switching between pages of a website that display larger background images, this typically leads to noticeable delays and possibly to flickering effects. It was these effects that made us notice that something was no longer right.

Actually this would have been OK – after all, I had chosen the setting myself.
What is irritating, however, is that none of the developer tools shows any information about the permanently deactivated cache. The developer tools integrated into FF (under the menu “Tools >> Web Developer …”) offer, e.g. in the “Network” analysis panel, an option to switch the cache off and on temporarily. The “Toolbox Options” there also allow a deactivation for the whole period during which the developer toolbox is in use. These settings, however, are site-specific as well as temporary and vanish at the latest when the FF developer toolbox is closed.

Unfortunately, after the update to Quantum the tools mentioned above did not indicate that the cache was deactivated. But it was – because of the setting options and the choice last made under the old FF version.

Fixing this problem is simple once you know where to look:

Open the URL “about:config“. There, enter the string “browser.cache.*.enable” into the search field. In my case the search returned 5 different settings:

browser.cache.disk.enable – browser.cache.disk.smart_size.enabled – browser.cache.memory.enable – browser.cache.offline.enable – browser.cache.offline.insecure.enable

In my case the values for “browser.cache.disk.enable” and “browser.cache.memory.enable” were unfortunately set to “false”. Set these values to “true” – and the FF cache works again!
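Alternatively, the same values can be set via a “user.js” file in the Firefox profile directory – a minimal sketch (editing them in about:config works just as well):

// user.js in the Firefox profile directory
user_pref("browser.cache.disk.enable", true);
user_pref("browser.cache.memory.enable", true);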

For temporarily switching the cache off one can use the options in Firefox’s own developer toolbox. This option for temporary deactivation is usually sufficient for me when testing web pages.

Unfortunately there still seem to be some bugs here under Quantum: In my view the pie charts of the “start performance analysis” feature erroneously do not show, for many pages, what would happen when the cache is used. For certain sites they display “responses from cache: 0”, although the regular bar-chart display of the download times for the page elements shows the cache usage with correct values.

Nevertheless – continue to have fun with FF.