Thursday, March 31, 2011

Timeline analysis on Pauldotcom

Awesome tech segment on MFT timeline analysis from the Pauldotcom guys.

The tech segment explains how to perform a timeline analysis with open-source tools and how to spot anti-forensic techniques like timestamp manipulation.

More information on timestamp manipulation can be found on the SANS Computer Forensics Blog, which I already commented on in this post.

The original blog post on the SANS Computer Forensics Blog talks about a tool called mft_parser_cl, created by Mark McKinnon and released for this tech segment. It is really helpful for spotting timestamp manipulation, because it can pull $FILE_NAME timestamps and put them into bodyfile format so they can be added to the overall timeline for analysis.
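The timestomping check itself is easy to sketch: manipulation tools usually rewrite the $STANDARD_INFORMATION timestamps but leave $FILE_NAME untouched, so comparing the two sets (which mft_parser_cl lets you pull) can flag suspects. A minimal Python sketch; the function name and tolerance are my own, not part of the tool:

```python
import datetime

def flag_timestomping(si_created, fn_created, tolerance_s=1):
    """Flag possible timestamp manipulation: timestomping tools
    typically rewrite the $STANDARD_INFORMATION timestamps but leave
    $FILE_NAME untouched, so SI noticeably earlier than FN is suspicious."""
    return (fn_created - si_created).total_seconds() > tolerance_s

si = datetime.datetime(2003, 1, 1, 0, 0, 0)     # what the attacker faked
fn = datetime.datetime(2011, 3, 30, 14, 25, 0)  # what $FILE_NAME still holds
print(flag_timestomping(si, fn))  # True -> worth a closer look
```

In a real timeline you would run this over every record in the bodyfile, not a single pair.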





iPhone forensics with Paraben

Via InfoSec Institute.

In this video, we will review the wealth of forensic data stored on an iPhone 3Gs using Paraben’s Device Seizure software.

The following information can be extracted out of the iPhone:
  • Web browser history
  • A history of all locations looked up on map applications
  • The phone’s serial number and the owner’s public key
  • The call and text history, including call durations
  • Dynamic text, which is a wealth of useful forensic information



iPhone Forensics & Data Recovery from darren dalasta on Vimeo.

Tuesday, March 29, 2011

Microsoft has taken down the Rustock botnet

Many people are probably already aware, because this story has made a lot of headlines and hit the media.

Yes, Microsoft has reportedly taken down the Rustock botnet, like they did before with Waledac.

I tend to think it was a coordinated effort with many researchers and institutions involved, but we cannot deny that Microsoft has the resources needed to make it possible :)

This is a small compilation of articles/posts I have found on the Internet:

General explanation,
http://www.h-online.com/security/news/item/Rustock-botnet-out-of-action-1210450.html

Legal order,
http://www.noticeofpleadings.com/

Complaint,
http://blogs.technet.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-82-95-DCU/2112.2011_2D00_02_2D00_09_2D00_Complaint.pdf


A really nice explanation by Brian Krebs, as always :)
http://krebsonsecurity.com/2011/03/homegrown-rustock-botnet-fed-by-u-s-firms/


Arstechnica
http://arstechnica.com/microsoft/news/2011/03/how-operation-b107-decapitated-the-rustock-botnet.ars

Monday, March 28, 2011

SpyEye Botmasters Try To Sabotage abuse.ch

I have learnt via this post that cybercriminals are trying to sabotage abuse.ch's trackers.

They have added a DDoS plug-in to the trojans that attacks the trackers' infrastructure, and they are also trying to get legitimate domains into the tracker by adding them to the list of drop points.

Extracting Real VNC passwords from the Windows Registry

This post from carnal0wnage explains how to extract the Real VNC passwords from the Windows Registry.

The password is DES-encrypted, but RealVNC uses a hardcoded key. No comment on that... :)

Thursday, March 24, 2011

CAs being owned and the SSL trust model

I really recommend reading this post from Jacob Appelbaum, if you want to understand the story of the compromised CA.

To summarize: it seems a CA named COMODO High Assurance Secure Server CA was compromised, and the attacker issued valid certificates with its keys.

I quote Comodo's statement:

One user account in one RA was compromised. The attacker created himself a new userID (with a new username and password) on the compromised user account.

It seems that some of the issued certificates were for: login.live.com, mail.google.com, www.google.com, login.yahoo.com (3 certificates), login.skype.com and addons.mozilla.org.

So far, with the information above, we can discuss how broken the SSL trust model is, since just one compromised CA can cause great damage and enable a MITM attack against a big website like mail.google.com.

But that is not all. It seems that the main browser developers were "silently" issuing patches to blacklist the rogue certificates, until Appelbaum analyzed the serial numbers, as explained in the post.

Also, Certificate Revocation Lists (CRLs) do not seem to work, because browsers "fail open" by default. This means that the browser will not complain if it cannot check the CRL (and the CAs do not seem to help a lot to get things better), and the certificate will be blindly accepted, as explained here.

Finally, Comodo seems to blame the Iranian government because the attack came from an Iranian IP address, but in my opinion that does not mean the Iranian government is behind it.

Wednesday, March 23, 2011

Windows Integrity Levels explained

This post from the Internet Storm Center explains the concept of Integrity Levels, a mechanism available on Windows Vista, 7 and 2008.

Integrity levels can restrict one process from interacting with another process even if both processes are running under the same user account and even if the user has administrative privileges. 

Basically, a process running at a lower integrity level will be limited in the way it can interact with processes that run at a higher integrity level, regardless of the access rights. This can be really helpful to mitigate a possible exploitation.

This is why it's advantageous to run the processes that are likely to be targeted by exploits under the Low integrity level. For instance, if a browser running under the Low integrity level gets exploited, the attacker's payload will have a hard time injecting itself into the majority of other processes or modifying critical files.

It seems it is a key tool used to create sandboxes in Internet Explorer, Chrome and the new Adobe Acrobat.

The article links to the following blog post written by Didier Stevens and called Integrity Levels and DLL Injection. It describes how this feature blocks a DLL injection attempt from a Low Integrity process to another with Medium Integrity.

Network Sniffers Class

Via IronGeek, an awesome list of tools and videos that will help you do sniffing and MITM attacks.

I link to the videos on Vimeo, but his site is really worth a visit :)



Sniffers Class Part 1 from Adrian Crenshaw on Vimeo.



Sniffers Class Part 2 from Adrian Crenshaw on Vimeo.



Sniffers Class Part 3 from Adrian Crenshaw on Vimeo.

Snort and Sguil easy installation with a Slackware Linux ISO

Via the Internet Storm Center: there is a Slackware Linux installation ISO with Snort and Sguil ready to use.

More info here [pdf]

DNS Prefetching implications

DNS prefetching can be a nice feature to speed up browsing, but it can also cost you a big headache, and a big bill, if you run a website with many visits.

In this post you can learn what happens when your website has many subdomains and they are prefetched on each visit. This is really important if you are paying for your DNS service and the number of queries sent matters. Firefox seems to be even worse, because it also tries to resolve IPv6 addresses (AAAA queries) for each subdomain.

It seems that browsers understand a standard tag that allows disabling prefetching.
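For reference, this is the control the post refers to: a page-level meta tag (or the equivalent X-DNS-Prefetch-Control HTTP response header) that turns prefetching off:

```html
<!-- Disable DNS prefetching for this page (understood by Firefox and Chrome) -->
<meta http-equiv="x-dns-prefetch-control" content="off">
```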

More info on Controlling DNS Prefetching

Thursday, March 17, 2011

Bad password implementations and brute-force attacks

This series of posts [ 1 , 2 ] from SkullSecurity is really enlightening.

I understand that the main error here is using a small seed. I am not an expert, but the number of possible passwords (the universe) directly depends on the seed used. Therefore, if the seed can only take 1,000,000 values, we will only have one million possible passwords, which can easily be pre-computed as (password, MD5-hash) pairs and used in an offline attack with John the Ripper.
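To illustrate the idea with a toy model (this is my own sketch, not SkullSecurity's actual code): if the password is fully determined by a small numeric seed, the whole universe can be precomputed once and reused offline.

```python
import hashlib
import random

def password_for_seed(seed, length=8):
    """Hypothetical generator modelled on the flawed scheme:
    the password is fully determined by a small numeric seed."""
    rng = random.Random(seed)
    chars = "abcdefghijklmnopqrstuvwxyz0123456789"
    return "".join(rng.choice(chars) for _ in range(length))

# With only 1,000,000 possible seeds there are at most 1,000,000
# passwords, so the (MD5-hash, password) pairs can be precomputed
# once and reused against any stolen hash.
table = {}
for seed in range(1000):  # small range just for the demo
    pw = password_for_seed(seed)
    table[hashlib.md5(pw.encode()).hexdigest()] = pw

stolen_hash = hashlib.md5(password_for_seed(42).encode()).hexdigest()
print(table[stolen_hash] == password_for_seed(42))  # True
```

In practice you would feed the precomputed wordlist to John the Ripper instead of building a Python dict.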


The attack in the second post is fairly similar, but it ends up with a really small universe of only 15,993 possible passwords, due to a really bad implementation that even permits an easy and successful online attack.

The attack consists of grabbing the HTML output corresponding to a failed login and then comparing the HTML output of each brute-force attempt against it. If the md5sum does not match, the password is valid.
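A rough sketch of that comparison loop, with a stand-in function instead of real HTTP requests (the application, accounts and passwords here are made up for the demo):

```python
import hashlib

def fake_login(user, password):
    """Stand-in for the web application: returns the HTML of the
    response. Only the failure page is byte-for-byte stable."""
    if password == "s3cret":
        return "<html>Welcome back, %s!</html>" % user
    return "<html>Login failed.</html>"

# Grab the reference output of a known-bad attempt once...
baseline = hashlib.md5(fake_login("admin", "definitely-wrong").encode()).hexdigest()

# ...then flag any candidate whose response hashes differently.
for candidate in ["123456", "letmein", "s3cret"]:
    page = fake_login("admin", candidate)
    if hashlib.md5(page.encode()).hexdigest() != baseline:
        print("valid password:", candidate)  # prints: valid password: s3cret
```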

Analyzing malware packaged in malicious PDF files

Great post from research.zscaler.com

It explains how to analyze a PDF that contains malicious code. The following steps are followed during the analysis.


- Analyze/extract the different objects from the PDF file. In this case the file contains JavaScript code.
- Use Malzilla to evaluate the JavaScript code and extract the shellcode, which is Unicode-encoded.
- Decode the shellcode to obtain a valid executable binary.
- Use a debugger (OllyDbg) to analyze the binary. The analyst extracts the XORed code from the binary.
- Use the debugger again to analyze the extracted code. It contacts a website to download the second stage and infect the host computer.
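The shellcode-decoding step is simple to sketch in Python, assuming the common %uXXXX Unicode encoding seen in malicious JavaScript, where each token is a 16-bit little-endian word:

```python
import re

def decode_percent_u(blob):
    """Turn %uXXXX-encoded shellcode (as commonly found in malicious
    PDF/JavaScript) back into raw bytes. Each %uXXXX token is one
    16-bit word stored little-endian."""
    words = re.findall(r"%u([0-9a-fA-F]{4})", blob)
    out = bytearray()
    for w in words:
        value = int(w, 16)
        out.append(value & 0xFF)         # low byte first
        out.append((value >> 8) & 0xFF)  # then high byte
    return bytes(out)

# Two NOP sleds (0x90) followed by a short jump:
print(decode_percent_u("%u9090%u9090%uEB04").hex())  # 9090909004eb
```

The resulting bytes can then be written to a file and loaded into a debugger or disassembler.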

Wednesday, March 9, 2011

DLP is the next Silver Bullet

I think I really do not need to explain what DLP is, unless you have been disconnected for many years.

I was astonished when I first read this post from the Internet Storm Center. The post describes a setup of Snort running on a bridge, inspecting the traffic between the corporate network and the border router (fair enough).


Then, the following rule is used as an example to catch a possible data ex-filtration.

alert ip 192.168.1.0/24 any -> any any (msg:"Data Loss from inside the network"; content:"Company X - Confidential"; rev:1)

I am not an expert in security and you do not have to trust my word, but I think that deploying a device in front of the border router with this kind of signature is only going to catch the most naive users.

A skilled attacker will encode/encrypt/partition the data and 'act' like a normal user in order to bypass this kind of rule. Therefore, we just get a false sense of security.
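A trivial illustration: a single base64 pass is already enough to defeat a literal content match (the marker string is the one from the example rule, the payload is made up):

```python
import base64

secret = "Company X - Confidential: Q1 numbers attached"
exfil = base64.b64encode(secret.encode()).decode()

# The literal marker the Snort rule matches on never appears on the wire.
print("Company X - Confidential" in exfil)  # False
```

And base64 is the lazy option; compression, encryption or simply splitting the file across many small requests works just as well.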

I think the only way to detect a skilled attacker is by knowing your network and applying the ideas explained in Richard Bejtlich's book, Extrusion Detection: Security Monitoring for Internal Intrusions.

Monday, March 7, 2011

Linux Support in Volatility

Looks like people are working on supporting Linux memory images in Volatility. You can find more information on the Volatility blog and on attrc's blog (a developer).

attrc's blog post is especially interesting because it explains the currently implemented functionality.

There is also this nice blog post that explains how to use the new Linux support in Volatility to solve the latest Honeynet challenge.

Tuesday, March 1, 2011